WorldWideScience

Sample records for hand gestures eye

  1. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second one detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers, and the results show that our system is robust and competitive.
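
    The following is a rough, hypothetical sketch (in Python with scikit-learn) of the kind of decision-level fusion the abstract describes: the labels and confidences emitted by an eye-gesture recognizer and a hand-gesture recognizer are combined by a decision tree that either outputs a final command or rejects the movement as unintentional. The feature layout, example values, and reject label are invented for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of decision-level fusion with a decision tree,
# in the spirit of the abstract above (not the authors' implementation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assume each sample holds the label and confidence emitted by the two
# recognizers: [eye_label, eye_confidence, hand_label, hand_confidence].
X_train = np.array([
    [0, 0.91, 0, 0.88],   # both recognizers agree -> intentional command 0
    [1, 0.85, 1, 0.80],   # both agree -> intentional command 1
    [0, 0.40, 2, 0.35],   # low confidence, disagreement -> natural movement
    [2, 0.95, 2, 0.30],   # eye confident, hand not -> intentional command 2
])
y_train = np.array([0, 1, -1, 2])   # -1 marks "ignore: natural movement"

fusion_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
fusion_tree.fit(X_train, y_train)

# Fuse a new pair of recognizer decisions into one final command.
print(fusion_tree.predict([[1, 0.82, 1, 0.77]]))   # expected to print [1] for this toy tree
```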

  2. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information from an eye tracker has been proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods, a major advantage of the method is that the user keeps the gaze … mobile phone screens. The user study shows that the method detects a set of defined gestures reliably.

  3. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    Science.gov (United States)

    Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…

  4. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis. The thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency modulated ultrasonic signal, designed to meet project requirements such as the update rate and the range of detection, and to work within hardware limitations such as limited output power and limited transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: range estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods: the channel impulse response (CIR) and the cross correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. The detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
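
    As an illustration of the range-by-cross-correlation idea mentioned in the abstract, the sketch below simulates a linear-FM ultrasonic chirp, builds a delayed echo from a hand, and recovers the range from the lag of the correlation peak along with a crude received-signal-strength estimate. All parameter values (sampling rate, chirp band, hand distance) are assumptions for the example, not those of the thesis.

```python
# Illustrative sketch (not the thesis code): estimating hand range and
# received signal strength (RSS) from a reflected linear-FM chirp by
# cross-correlation. Parameter values are made up for the example.
import numpy as np
from scipy.signal import chirp, correlate

fs = 192_000                      # assumed sampling rate (Hz)
t = np.arange(0, 0.005, 1 / fs)   # 5 ms chirp
tx = chirp(t, f0=20_000, f1=40_000, t1=t[-1])   # ultrasonic LFM pulse

# Simulate an echo from a hand ~30 cm away (speed of sound ~343 m/s).
delay_s = 2 * 0.30 / 343.0
echo = np.zeros(int(fs * 0.01))
start = int(delay_s * fs)
echo[start:start + len(tx)] = 0.2 * tx          # attenuated reflection

# Range from the lag of the cross-correlation peak.
xc = correlate(echo, tx, mode="full")
lag = np.argmax(np.abs(xc)) - (len(tx) - 1)
range_m = (lag / fs) * 343.0 / 2
rss = np.sum(echo[start:start + len(tx)] ** 2)  # crude RSS estimate

print(f"estimated range: {range_m:.2f} m, RSS: {rss:.3f}")
```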

  5. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    Science.gov (United States)

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  6. Hand use and gestural communication in chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hopkins, W D; Leavens, D A

    1998-03-01

    Hand use in gestural communication was examined in 115 captive chimpanzees (Pan troglodytes). Hand use was measured in subjects while they gestured to food placed out of their reach. The distribution of hand use was examined in relation to sex, age, rearing history, gesture type, and whether the subjects vocalized while gesturing. Overall, significantly more chimpanzees, especially females and adults, gestured with their right than with their left hand. Food begs were more lateralized to the right hand than pointing, and a greater prevalence of right-hand gesturing was found in subjects who simultaneously vocalized than in those who did not. Taken together, these data suggest that referential, intentional communicative behaviors, in the form of gestures, are lateralized to the left hemisphere in chimpanzees.

  7. Hand Gesture Recognition with Leap Motion

    OpenAIRE

    Du, Youchen; Liu, Shenglan; Feng, Lin; Chen, Menghui; Wu, Jie

    2017-01-01

    The recent introduction of depth cameras like the Leap Motion Controller allows researchers to exploit depth information to recognize hand gestures more robustly. This paper proposes a novel hand gesture recognition system with the Leap Motion Controller. A series of features is extracted from Leap Motion tracking data; we feed these features, along with HOG features extracted from sensor images, into a multi-class SVM classifier to recognize the performed gesture, dimension reduction and feature weight...

  8. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable and 3D gestural interaction might be the most effective alternative for the current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and database and retrieving the best match. Once the best match is found from the database in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  9. Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval

    Directory of Open Access Journals (Sweden)

    Alex Ming Hui Wong

    2016-01-01

    Full Text Available One of the latest authentication methods is discerning human gestures. Previous research has shown that different people can develop distinct gesture behaviours even when executing the same gesture. The hand gesture is one of the most commonly used gestures in both communication and authentication research, since it requires less room to perform than other bodily gestures. There are different types of hand gesture and they have been researched by many researchers, but the stationary hand gesture has yet to be thoroughly explored. There are a number of disadvantages and flaws in general hand gesture authentication, such as reliability, usability, and computational cost. Although the stationary hand gesture cannot solve all of these problems, it still provides benefits and advantages over other hand gesture authentication methods, such as treating the gesture as a motion flow instead of a trivial image capture, requiring less room to perform, and needing fewer visual cues during performance. In this paper, we introduce stationary hand gesture authentication by implementing edit distance on finger pointing direction interval (ED-FPDI) from hand gestures to model a behaviour-based authentication system. The accuracy rate of the proposed ED-FPDI shows promising results.
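
    A minimal sketch of the idea behind ED-FPDI, under the assumption that a stationary gesture can be summarized as a sequence of quantized finger-pointing-direction symbols: enrolment and authentication sequences are compared with the classic edit (Levenshtein) distance and a threshold decides acceptance. The direction symbols and the threshold below are invented for illustration and are not the paper's.

```python
# Hedged illustration of the ED-FPDI idea: compare an enrolled sequence of
# quantized finger-pointing directions against an authentication attempt
# using edit distance. Symbols and threshold are invented for the example.

def edit_distance(a, b):
    """Classic Levenshtein distance between two symbol sequences."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                           # deletion
                           dp[i][j - 1] + 1,                           # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[len(a)][len(b)]

enrolled = ["N", "NE", "E", "E", "SE"]    # directions sampled over intervals
attempt  = ["N", "E",  "E", "SE", "SE"]

score = edit_distance(enrolled, attempt)
print("accept" if score <= 2 else "reject", score)
```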

  10. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric-controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion, and thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
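
    To make the selection criterion concrete, the snippet below computes per-gesture sensitivity and specificity from a confusion matrix, which is the kind of measure the proposed PNM index ranks gestures by; the matrix values are made up, and this is not the paper's PNM formula itself.

```python
# Illustrative (not the paper's code): per-gesture sensitivity and
# specificity from a confusion matrix. The matrix values are made up.
import numpy as np

# conf[i, j] = number of trials of gesture i classified as gesture j
conf = np.array([[18, 1, 1],
                 [ 2, 16, 2],
                 [ 0, 3, 17]])

for g in range(conf.shape[0]):
    tp = conf[g, g]
    fn = conf[g, :].sum() - tp
    fp = conf[:, g].sum() - tp
    tn = conf.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"gesture {g}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# Gestures with the lowest scores would be the first candidates to drop
# when selecting a reliable sub-set, as the abstract describes.
```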

  11. Touch-less interaction with medical images using hand & foot gestures

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Smith, Jeremiah; Sousa, Miguel

    2013-01-01

    In this paper, we present a system for gesture-based interaction with medical images based on a single wristband sensor and capacitive floor sensors, allowing for hand and foot gesture input. The first limited evaluation of the system showed an acceptable level of accuracy for 12 different hand & foot gestures; users also found that our combined hand and foot based gestures are intuitive for providing input.

  12. Web-based interactive drone control using hand gesture

    Science.gov (United States)

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted over WiFi, and all information exchange takes place on the web. The control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global-positioning-system-denied environment or by handlers without professional drone control knowledge, since it is easy to get started.

  13. Web-based interactive drone control using hand gesture.

    Science.gov (United States)

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted over WiFi, and all information exchange takes place on the web. The control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global-positioning-system-denied environment or by handlers without professional drone control knowledge, since it is easy to get started.

  14. White Lies in Hand: Are Other-Oriented Lies Modified by Hand Gestures? Possibly Not.

    Science.gov (United States)

    Cantarero, Katarzyna; Parzuchowski, Michal; Dukala, Karolina

    2017-01-01

    Previous studies have shown that the hand-over-heart gesture is related to being more honest as opposed to using self-centered dishonesty. We assumed that the hand-over-heart gesture would also relate to other-oriented dishonesty, though the latter differs highly from self-centered lying. In Study 1 (N = 79), we showed that performing a hand-over-heart gesture diminished the tendency to use other-oriented white lies and that the fingers-crossed-behind-one's-back gesture was not related to higher dishonesty. We then pre-registered and conducted Study 2 (N = 88), which was designed following higher methodological standards than Study 1. Contrary to the findings of Study 1, we found that using the hand-over-heart gesture did not result in refraining from using other-oriented white lies. We discuss the findings of this failed replication, indicating the importance of strict methodological guidelines in conducting research, and also reflect on the relatively small effect sizes related to some findings in embodied cognition.

  15. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods: Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm, for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that typically has no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
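
    A minimal dynamic time warping (DTW) sketch, since DTW is the recognition step named above; it assumes 1-D feature sequences, whereas the paper applies DTW to tracked hand trajectories, so the features and distance used here are purely illustrative.

```python
# Minimal DTW sketch (illustrative, not the paper's recognizer).
import numpy as np

def dtw_distance(s, t):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.array([0.0, 0.2, 0.7, 1.0, 0.6, 0.1])   # stored gesture template
query    = np.array([0.0, 0.1, 0.3, 0.8, 1.0, 0.5, 0.1])

# In a recognizer, the query would be assigned to the template with the
# smallest DTW distance.
print(dtw_distance(template, query))
```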

  16. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods: Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm, for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that typically has no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  17. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-01-01

    Gestures are identified based on two main features: range estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods: the channel impulse response (CIR) and the cross correlation (CC) of the ultrasonic signal reflected from the gesturing hand.

  18. White Lies in Hand: Are Other-Oriented Lies Modified by Hand Gestures? Possibly Not

    Directory of Open Access Journals (Sweden)

    Katarzyna Cantarero

    2017-06-01

    Full Text Available Previous studies have shown that the hand-over-heart gesture is related to being more honest as opposed to using self-centered dishonesty. We assumed that the hand-over-heart gesture would also relate to other-oriented dishonesty, though the latter differs highly from self-centered lying. In Study 1 (N = 79), we showed that performing a hand-over-heart gesture diminished the tendency to use other-oriented white lies and that the fingers-crossed-behind-one's-back gesture was not related to higher dishonesty. We then pre-registered and conducted Study 2 (N = 88), which was designed following higher methodological standards than Study 1. Contrary to the findings of Study 1, we found that using the hand-over-heart gesture did not result in refraining from using other-oriented white lies. We discuss the findings of this failed replication, indicating the importance of strict methodological guidelines in conducting research, and also reflect on the relatively small effect sizes related to some findings in embodied cognition.

  19. Hand gesture recognition by analysis of codons

    Science.gov (United States)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system would substitute for the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or silhouette of the fingers. Many previous techniques for recognizing gestures dealt with motion and geometric features of hands. In this thesis, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between the maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (the letters J and Z are ignored as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and are assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.
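
    As a rough illustration of the codon idea, the sketch below walks a closed contour, estimates discrete curvature at each point, and encodes the sign pattern as a symbol string that could later be matched against stored codon lists; the thesis's actual codon definitions and WFIT matching are not reproduced here.

```python
# Rough sketch of the underlying idea: compute a discrete curvature estimate
# along a closed contour and encode the sign pattern (+, -, 0) as a symbol
# string. This is an illustration, not the thesis implementation.
import numpy as np

def curvature_signs(contour, eps=1e-3):
    """contour: (N, 2) array of x, y points along a closed boundary."""
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)
    return "".join("+" if k > eps else "-" if k < -eps else "0" for k in kappa)

# Toy contour: a circle (always convex, so the string is dominated by '+').
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(curvature_signs(circle)[:20])
```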

  20. Adaptation in Gesture: Converging Hands or Converging Minds?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; Maes, Alfons; Swerts, Marc

    2012-01-01

    Interlocutors sometimes repeat each other's co-speech hand gestures. In three experiments, we investigate to what extent the copying of such gestures' form is tied to their meaning in the linguistic context, as well as to interlocutors' representations of this meaning at the conceptual level. We found that gestures were repeated only if they could…

  1. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. First, the face is removed from the image using a Haar classifier, and subsequently the regions corresponding to the gesturing hands are isolated by a region labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications, such as sign language recognition and human-robot interaction.

  2. A biometric authentication model using hand gesture images.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures of hand sign language is proposed. The measurements of hand gestures can be sequentially acquired by a low-cost video camera. There could possibly be another level of contextual information, associated with these hand signs, to be used in biometric authentication. As an analogue, instead of typing a password 'iloveu' in text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using a sequence of hand signs: 'i', 'l', 'o', 'v', 'e', and 'u'. Subsequently, the features from the hand gesture images, which are inherently fuzzy in nature, are extracted and recognized by a classification model that tells whether the signer is who he claims to be, by examining his hand shape and the postures used in making those signs. It is believed that everybody has certain slight but unique behavioral characteristics in sign language, as well as different hand shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histograms and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation is conducted to investigate the efficacy of this novel biometric authentication model, which shows up to 93.75% recognition accuracy.

  3. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remotes alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers state-of-the-art hand gesture recognition approaches and how they evolved from their inception. The author also details his research in this area over the past 8 years and how the future of HCI might turn out. This monograph will serve as a valuable guide for researchers who would endeavour into the world of HCI.

  4. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made syllabic-rhythm hand gestures, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gestures, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  5. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, for hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak in the likelihood. The proposed hand tracking method, namely the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably, in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the challenging hand-signed digits datasets and an American sign language dataset, which corroborate the performance of the novel techniques.

  6. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  7. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    Full Text Available The natural user interface (NUI) is used for natural motion interfaces without using devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second module is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can also control the computer using the user's hand gestures without a mouse and keyboard. In the proposed framework, the user's head direction and hand gestures are mapped into mouse and keyboard events, respectively.

  8. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized to predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those samples of gestures which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.
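
    One concrete detail mentioned above is the normalization of motion vectors to 8 or 16 predetermined directions; the helper below shows one plausible way to quantize a hand-motion vector into a direction index so that a trajectory becomes a short symbol sequence. The bin layout is an assumption, not the authors' code.

```python
# Illustrative helper: map motion vectors to one of 8 (or 16) direction bins
# so that a trajectory becomes a symbol sequence for a trajectory classifier.
import numpy as np

def quantize_direction(dx, dy, n_bins=8):
    """Map a motion vector (dx, dy) to one of n_bins direction indices."""
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    bin_width = 2 * np.pi / n_bins
    return int((angle + bin_width / 2) // bin_width) % n_bins

trajectory = [(5, 0), (4, 3), (0, 6), (-4, 4), (-5, 0)]   # toy hand motion
symbols = [quantize_direction(dx, dy, n_bins=8) for dx, dy in trajectory]
print(symbols)   # [0, 1, 2, 3, 4] for this toy trajectory
```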

  9. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized to predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those samples of gestures which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  10. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and the traditional interaction devices for interacting with it are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a much more popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, which are used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
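
    A simplified sketch of the skin-segmentation step described above: convert RGB to YCbCr and keep pixels whose (Cb, Cr) values fall inside an ellipse. The ellipse centre and axes below are rough textbook-style values, not the parameters fitted in the paper, and the codebook background model is omitted.

```python
# Simplified illustration of elliptical-boundary skin segmentation in the
# CbCr plane; ellipse parameters are assumptions, not the paper's values.
import numpy as np

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 image; returns a boolean skin mask."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b        # BT.601 conversion
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Elliptical boundary in the CbCr plane (assumed centre and axes).
    cb0, cr0, a, bax = 110.0, 150.0, 25.0, 15.0
    return ((cb - cb0) / a) ** 2 + ((cr - cr0) / bax) ** 2 <= 1.0

frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in frame
print(skin_mask(frame).mean())   # fraction of pixels labelled as skin
```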

  11. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and the traditional interaction devices for interacting with it are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a much more popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, which are used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  12. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolution neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve the robust performance, the skin model by a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and the calibration of position and orientation is applied to obtain the neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, the corresponding robot motion primitives to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because the XABSL provides the flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can be easily used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, by the XABSL programming platform, the experiment showed the cube-stacking task was easily programmed by human demonstration.
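
    To illustrate the gesture-classification module, here is a hedged sketch of a small convolutional network for the four therblig gestures (TE, TL, G, RL) on cropped hand images, written with Keras; the architecture, input size, and training call are assumptions rather than the network used in the paper.

```python
# Hypothetical small CNN for four therblig gesture classes; not the paper's
# architecture, just a sketch of the classification step it describes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # normalized hand crop (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),    # TE, TL, G, RL
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10) would then be run on
# labelled, skin-filtered hand crops as described in the abstract.
```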

  13. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  14. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for those activities where the movements of the robotic arm were not previously scheduled, making training the robot easier than using a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  15. Effects of the restriction of hand gestures on disfluency.

    OpenAIRE

    Finlayson, Sheena; Forrest, Victoria; Lickley, Robin; Beck, Janet M

    2003-01-01

    This paper describes an experimental pilot study of disfluency and gesture rates in spontaneous speech where speakers perform a communication task in three conditions: hands free, one arm immobilized, both arms immobilized. Previous work suggests that the restriction of the ability to gesture can have an impact on the fluency of speech. In particular, it has been found that the inability to produce iconic gestures, which depict actions and objects, results in a higher rate of disfluency. Mode...

  16. Hand gestures support word learning in patients with hippocampal amnesia.

    Science.gov (United States)

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2018-06-01

    Co-speech hand gesture facilitates learning and memory, yet the cognitive and neural mechanisms supporting this remain unclear. One possibility is that motor information in gesture may engage procedural memory representations. Alternatively, iconic information from gesture may contribute to declarative memory representations mediated by the hippocampus. To investigate these alternatives, we examined gesture's effects on word learning in patients with hippocampal damage and declarative memory impairment, with intact procedural memory, and in healthy and in brain-damaged comparison groups. Participants learned novel label-object pairings while producing gesture, observing gesture, or observing without gesture. After a delay, recall and object identification were assessed. Unsurprisingly, amnesic patients were unable to recall the labels at test. However, they correctly identified objects at above chance levels, but only if they produced a gesture at encoding. Comparison groups performed well above chance at both recall and object identification regardless of gesture. These findings suggest that gesture production may support word learning by engaging nondeclarative (procedural) memory. © 2018 Wiley Periodicals, Inc.

  17. Using virtual data for training deep model for hand gesture recognition

    Science.gov (United States)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition from hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture, with split input, produces an accuracy rate of 85.2%. In this paper, the authors also propose using virtual data for training a supervised deep model. This technique is aimed at avoiding the use of original labelled images in the training process. The interest of this method for data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: using a copious amount of labelled data during training.

  18. Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Tracy [Texas A&M University, College Station]; Tourassi, Georgia [ORNL]; Yoon, Hong-Jun [ORNL]; Alamudun, Folami T. [ORNL]

    2017-07-01

    In this study, we present a novel application of sketch gesture recognition to eye movements for biometric identification and estimating task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye movements. Our results show that saccadic eye movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.

  19. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which cannot keep pace with the latest technology. Hand gestures have become one of the most important and attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for the hand's template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
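
    A minimal sketch of the "average background" idea mentioned above: maintain a running average of the scene and flag pixels that deviate from it as the moving hand. The learning rate and threshold are invented, and the 1$ template-matching stage is not shown.

```python
# Minimal running-average background subtraction (illustrative parameters).
import numpy as np

class AverageBackground:
    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # learning rate of the running average
        self.threshold = threshold  # per-pixel foreground threshold
        self.background = None

    def apply(self, gray_frame):
        frame = gray_frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # Update the model only where the scene is judged to be background.
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask

subtractor = AverageBackground()
for _ in range(3):                                   # stand-in video frames
    frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
    fg = subtractor.apply(frame)
print(fg.mean())   # fraction of pixels flagged as foreground (the hand)
```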

  20. "Slight" of hand: the processing of visually degraded gestures with speech.

    Science.gov (United States)

    Kelly, Spencer D; Hansen, Bruce C; Clark, David T

    2012-01-01

    Co-speech hand gestures influence language comprehension. The present experiment explored what part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (HSF versus LSF) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies, and depending on what frequencies carry the most motion energy, parvocellular or magnocellular visual pathways are maximized to quickly and optimally extract meaning.

  1. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    Science.gov (United States)

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  2. Good and bad in the hands of politicians: spontaneous gestures during positive and negative speech.

    Directory of Open Access Journals (Sweden)

    Daniel Casasanto

    2010-07-01

    Full Text Available According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences. We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates. Speakers associate positive messages more strongly with dominant hand gestures and negative messages with non-dominant hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, results support and extend the body-specificity hypothesis. Furthermore, results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co-occurring speech.

  3. The Role of Conversational Hand Gestures in a Narrative Task

    Science.gov (United States)

    Jacobs, Naomi; Garnham, Alan

    2007-01-01

    The primary functional role of conversational hand gestures in narrative discourse is disputed. A novel experimental technique investigated whether gestures function primarily to aid speech production by the speaker, or communication to the listener. The experiment involved repeated narration of a cartoon story or stories to a single or multiple…

  4. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

    Science.gov (United States)

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents’ gestures. The oldest group was positively affected by the total frequency of parents’ gestures, and in particular, parents’ use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 – 6 years. PMID:26848192

  5. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving.

    Science.gov (United States)

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-11-01

    Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5 - 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents' gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents' gestures. The oldest group was positively affected by the total frequency of parents' gestures, and in particular, parents' use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 - 6 years.

  6. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    Directory of Open Access Journals (Sweden)

    Seongwan Kim

    2017-01-01

    Full Text Available The research on hand gestures has attracted many image processing-related studies, as hand gestures intuitively convey a person's motional intention. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand under varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network are leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing hand gestures under varying lighting conditions, based on the contribution of joint kernels of spatial adjacency and thermal range similarity.
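
    The joint-kernel idea summarized above (spatial adjacency combined with thermal range similarity) can be illustrated with a cross/joint bilateral filter, in which a registered thermal frame guides the smoothing of the visual frame. This is only a minimal sketch under that assumption; the parameter names (sigma_s, sigma_t) and the brute-force loop are ours, not the authors' implementation.

```python
import numpy as np

def thermal_guided_joint_filter(visual, thermal, radius=4, sigma_s=2.0, sigma_t=10.0):
    """Cross/joint bilateral filter: smooth the visual frame with weights from
    (i) spatial adjacency and (ii) range similarity in the thermal frame.
    Both inputs are 2-D float arrays of the same shape, already registered."""
    h, w = visual.shape
    vis = np.pad(visual, radius, mode='edge')
    thr = np.pad(thermal, radius, mode='edge')

    # Precompute the spatial (adjacency) kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    out = np.zeros_like(visual, dtype=float)
    for i in range(h):
        for j in range(w):
            vis_win = vis[i:i + 2*radius + 1, j:j + 2*radius + 1]
            thr_win = thr[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Thermal range-similarity kernel, centred on the current pixel.
            rng = np.exp(-((thr_win - thermal[i, j])**2) / (2.0 * sigma_t**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * vis_win) / np.sum(wgt)
    return out
```

    The double loop keeps the sketch readable but is O(HW r^2); a practical implementation would vectorize or use a guided-filter approximation.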

  7. Giving cognition a helping hand: the effect of congruent gestures on object name retrieval.

    Science.gov (United States)

    Pine, Karen J; Reeves, Lindsey; Howlett, Neil; Fletcher, Ben C

    2013-02-01

    The gestures that accompany speech are more than just arbitrary hand movements or communicative devices. They are simulated actions that can both prime and facilitate speech and cognition. This study measured participants' reaction times for naming degraded images of objects when simultaneously adopting a gesture that was either congruent or incongruent with the target object, and when not making any hand gesture. A within-subjects design was used, with participants (N = 122) naming 10 objects under each condition. Participants named the objects significantly faster when adopting a congruent gesture than when not gesturing at all. Adopting an incongruent gesture resulted in significantly slower naming times. The findings are discussed in the context of the intrapersonal cognitive and facilitatory effects of gestures and underline the relatedness between language, action, and cognition. © 2012 The British Psychological Society.

  8. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part, and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand-area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob features. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese Sign Language and American Sign Language gestures obtained from 5 people. Our experimental results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
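
    As a rough illustration of the PCA-based recognition step described above, one can resample each extracted hand trajectory to a fixed length, project it onto principal components learned from training trajectories, and classify with a nearest-neighbour rule. The resampling length, the 1-NN classifier, and all names below are illustrative assumptions; the HFLC extraction and the Condensation comparison are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def resample(traj, n=32):
    """Resample an (m, 2) hand trajectory to n points and flatten it."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    x = np.interp(t_new, t_old, traj[:, 0])
    y = np.interp(t_new, t_old, traj[:, 1])
    return np.concatenate([x, y])

def train(trajectories, labels, n_components=10):
    """trajectories: list of (m_i, 2) arrays; labels: gesture names."""
    X = np.stack([resample(t) for t in trajectories])
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), labels)
    return pca, clf

def predict(pca, clf, trajectory):
    """Classify a single new trajectory."""
    return clf.predict(pca.transform(resample(trajectory)[None, :]))[0]
```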

  9. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    Science.gov (United States)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
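
    As a generic illustration of decoding gestures from a multi-channel EMG sleeve (not the BioSleeve pipeline itself), a common baseline is to slice the recording into windows, compute per-channel features such as RMS, and train a standard classifier. Window length, step size, and the LDA classifier below are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rms_features(emg_window):
    """emg_window: (n_samples, n_channels) array -> one RMS value per channel."""
    return np.sqrt(np.mean(emg_window.astype(float) ** 2, axis=0))

def windows(emg, win=200, step=100):
    """Slice a long (n_samples, n_channels) recording into overlapping windows."""
    return [emg[s:s + win] for s in range(0, len(emg) - win + 1, step)]

def train_classifier(recordings):
    """recordings: list of (emg_array, gesture_label) pairs."""
    X, y = [], []
    for emg, label in recordings:
        for w in windows(emg):
            X.append(rms_features(w))
            y.append(label)
    return LinearDiscriminantAnalysis().fit(np.array(X), y)
```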

  10. Meaningful gesture in monkeys? Investigating whether mandrills create social culture.

    Directory of Open Access Journals (Sweden)

    Mark E Laidre

    Full Text Available BACKGROUND: Human societies exhibit a rich array of gestures with cultural origins. Often these gestures are found exclusively in local populations, where their meaning has been crafted by a community into a shared convention. In nonhuman primates like African monkeys, little evidence exists for such culturally-conventionalized gestures. METHODOLOGY/PRINCIPAL FINDINGS: Here I report a striking gesture unique to a single community of mandrills (Mandrillus sphinx) among nineteen studied across North America, Africa, and Europe. The gesture was found within a community of 23 mandrills where individuals old and young, female and male, covered their eyes with their hands for periods which could exceed 30 min, often while simultaneously raising their elbow prominently into the air. This 'Eye covering' gesture has been performed within the community for a decade, enduring deaths, removals, and births, and it persists into the present. Differential responses to Eye covering versus controls suggested that the gesture might have a locally-respected meaning, potentially functioning over a distance to inhibit interruptions as a 'do not disturb' sign operates. CONCLUSIONS/SIGNIFICANCE: The creation of this gesture by monkeys suggests that the ability to cultivate shared meanings using novel manual acts may be distributed more broadly beyond the human species. Although logistically difficult with primates, the translocation of gesturers between communities remains critical to experimentally establishing the possible cultural origin and transmission of nonhuman gestures.

  11. Different visual exploration of tool-related gestures in left hemisphere brain damaged patients is associated with poor gestural imitation.

    Science.gov (United States)

    Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M

    2015-05-01

    According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by the TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in the TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Hands in space: gesture interaction with augmented-reality interfaces.

    Science.gov (United States)

    Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai

    2014-01-01

    Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.

  13. Investigation of the Relationship between Hand Gestures and Speech in Adults Who Stutter

    Directory of Open Access Journals (Sweden)

    Ali Barikrou

    2008-12-01

    Full Text Available Objective: Gestures of the hands and arms have long been observed to accompany speech in spontaneous conversation. However, the way in which these two modes of expression are related in production is not yet fully understood. The present study therefore investigates the spontaneous gestures that accompany speech in adults who stutter in comparison to fluent controls. Materials & Methods: In this cross-sectional, comparative study, ten adults who stutter were selected randomly from speech and language pathology clinics and compared with ten healthy persons as a control group, matched with the stutterers for sex, age, and education. A cartoon story-retelling task was used to elicit spontaneous gestures accompanying speech. Participants were asked to watch the animation carefully and then retell the storyline in as much detail as possible to a listener sitting across from them, while their narration was video-recorded. The recorded utterances and gestures were then analyzed. Statistical methods such as the Kolmogorov-Smirnov and independent t-tests were used for data analysis. Results: The results indicated that stutterers, in comparison to controls, on average used fewer iconic gestures in their narration (P=0.005). They also used fewer iconic gestures per utterance and per word (P=0.019). Furthermore, examination of gesture production during moments of dysfluency revealed that more than 70% of the gestures produced with stuttering were frozen or abandoned at the moment of dysfluency. Conclusion: It seems that gesture and speech have such an intricate and deep association that they show similar frequency and timing patterns and move in parallel with each other, such that a deficit in speech results in a deficiency in hand gesture.

  14. Hand movements with a phase structure and gestures that depict action stem from a left hemispheric system of conceptualization.

    Science.gov (United States)

    Helmich, I; Lausberg, H

    2014-10-01

    The present study addresses the previously discussed controversy over the contribution of the right and left cerebral hemispheres to the production and conceptualization of spontaneous hand movements and gestures. Although it has been shown that each hemisphere contains the ability to produce hand movements, findings of left-hemispherically lateralized motor functions challenge the view of a contralateral hand movement production system. To examine hemispheric specialization in hand movement and gesture production, ten right-handed participants were tachistoscopically presented with pictures of everyday-life actions. The participants were asked to demonstrate with their hands, but without speaking, what they had seen in the drawing. Two independent blind raters evaluated the videotaped hand movements and gestures using the Neuropsychological Gesture Coding System. The results showed that the overall frequency of right- and left-hand movements was equal, independent of stimulus lateralization. When hand movements were analyzed with respect to their structure, presentation of the action stimuli to the left hemisphere resulted in more hand movements with a phase structure than presentation to the right hemisphere. Furthermore, presentation to the left hemisphere resulted in more right- and left-hand movements with a phase structure, whereas presentation to the right hemisphere increased only contralateral left-hand movements with a phase structure, as compared to hand movements without a phase structure. Gestures that depict action were displayed primarily in response to stimuli presented in the right visual field rather than in the left one. The present study shows that both hemispheres possess the faculty to produce hand movements in response to action stimuli. However, the left hemisphere dominates the production of hand movements with a phase structure and gestures that depict action. We therefore conclude that hand movements with a phase structure and gestures that

  15. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task performed in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns via data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the represented gesture patterns with a reasonable accuracy ratio.
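
    A minimal sketch of the chain-code-plus-HMM combination: quantize a fingertip trajectory into direction symbols and score it against per-gesture discrete HMMs with a scaled forward pass. The 8-direction code is a common convention rather than the paper's modified chain code, and the model parameters are assumed to have been trained elsewhere (e.g., with Baum-Welch).

```python
import numpy as np

def chain_code(traj, n_dirs=8):
    """Quantize an (m, 2) trajectory into direction symbols 0..n_dirs-1."""
    d = np.diff(traj, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    return (ang / (2 * np.pi / n_dirs)).astype(int) % n_dirs

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm for a discrete HMM.
    start: (S,), trans: (S, S), emit: (S, V) probability arrays."""
    alpha = start * emit[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

def classify(traj, models):
    """models: dict mapping gesture name -> (start, trans, emit)."""
    obs = chain_code(traj)
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```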

  16. Basic Hand Gestures Classification Based on Surface Electromyography

    Directory of Open Access Journals (Sweden)

    Aleksander Palkowski

    2016-01-01

    Full Text Available This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system uses a Support Vector Machine classifier, for which the kernel function and parameters are additionally optimised by the Cuckoo Search swarm algorithm. The developed system is compared with standard Support Vector Machine classifiers with various kernel functions. An average classification rate of 98.12% was achieved with the proposed method.
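
    A minimal stand-in for the classification stage: an RBF-kernel SVM on simple features from the two sEMG channels, with the kernel parameters chosen by an ordinary grid search as a simple substitute for the Cuckoo Search optimisation used in the paper. The feature set and parameter grid are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(window):
    """window: (n_samples, 2) sEMG segment -> per-channel RMS and waveform length."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([rms, wl])

def train_svm(segments, labels):
    """segments: list of (n_samples, 2) arrays, one per labelled gesture repetition."""
    X = np.stack([features(s) for s in segments])
    pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    grid = {'svc__C': [1, 10, 100], 'svc__gamma': ['scale', 0.01, 0.1]}
    return GridSearchCV(pipe, grid, cv=5).fit(X, labels)
```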

  17. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    W.T.J.L. Pouw (Wim); M.-F. Mavilidi (Myrto-Foteini); T.A.J.M. van Gog (Tamara); G.W.C. Paas (Fred)

    2016-01-01

    textabstractNon-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One

  18. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    Pouw, Wim T J L; Mavilidi, Myrto Foteini; van Gog, Tamara; Paas, Fred

    2016-01-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that

  19. Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language

    Science.gov (United States)

    Kelly, Spencer D.; Hirata, Yukari; Manansala, Michael; Huang, Jessica

    2014-01-01

    Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning. PMID:25071646

  20. Hand gesture recognition in confined spaces with partial observability and occultation constraints

    Science.gov (United States)

    Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.

  1. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  2. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.

    Science.gov (United States)

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2017-04-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.
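
    The power-saving behaviour mentioned at the end (disabling the UWB transceiver while the wrist is still) can be sketched as a variance test over a short accelerometer window; the threshold, window length, and the `uwb` object in the comment are illustrative placeholders, not SeleCon's actual values or API.

```python
import numpy as np

def wrist_is_stationary(accel_window, threshold=0.05):
    """accel_window: (n, 3) accelerometer samples in g.
    Returns True when the summed per-axis variance of the window falls below
    the threshold, in which case the UWB transceiver could be powered down."""
    return float(np.var(accel_window, axis=0).sum()) < threshold

# Example gating loop (the `uwb` calls are hypothetical placeholders):
# if wrist_is_stationary(latest_samples):
#     uwb.power_down()
# else:
#     uwb.power_up()
```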

  3. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A word in the hand: action, gesture and mental representation in humans and non-human primates

    Science.gov (United States)

    Cartmill, Erica A.; Beilock, Sian; Goldin-Meadow, Susan

    2012-01-01

    The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements. PMID:22106432

  5. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures

    Science.gov (United States)

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2018-01-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9m-by-10m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient to sustain daily use by turning off the UWB transceiver, when a user’s wrist is stationary. PMID:29683151

  6. Support vector machine and mel frequency Cepstral coefficient based algorithm for hand gestures and bidirectional speech to text device

    Science.gov (United States)

    Balbin, Jessie R.; Padilla, Dionis A.; Fausto, Janette C.; Vergara, Ernesto M.; Garcia, Ramon G.; Delos Angeles, Bethsedea Joy S.; Dizon, Neil John A.; Mardo, Mark Kevin N.

    2017-02-01

    This research is about translating a series of hand gestures to form a word and producing its equivalent sound, as it is read and said with a Filipino accent, using Support Vector Machine and Mel Frequency Cepstral Coefficient analysis. The concept is to detect Filipino speech input and translate the spoken words to their text form in Filipino. This study aims to help the Filipino deaf community impart their thoughts through the use of hand gestures and communicate with people who do not know how to read hand gestures. It also helps literate deaf users simply read the spoken words relayed to them using the Filipino speech-to-text system.

  7. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    Science.gov (United States)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation on conventional learning media that presents the whole laboratory learning process. Many tools and materials are needed by users for practical work in it, so users can experience a new learning atmosphere through this innovation. As technologies become more sophisticated, carrying them into education can make learning more effective and efficient. The supporting technologies needed to build the VR Lab are a head-mounted display device and a hand motion gesture device, and their integration is the basis of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory. The hand motion gesture device captures the user's real hand, which is then visualized in the virtual reality laboratory. Virtual reality shows that using the newest technologies in the learning process can make it more interesting and easier to understand.

  8. Finger Angle-Based Hand Gesture Recognition for Smart Infrastructure Using Wearable Wrist-Worn Camera

    Directory of Open Access Journals (Sweden)

    Feiyu Chen

    2018-03-01

    Full Text Available The rise of domestic robots in smart infrastructure has increased demand for intuitive and natural interaction between humans and robots. To address this problem, a wearable wrist-worn camera (WwwCam) is proposed in this paper. With the capability of recognizing human hand gestures in real time, it enables services such as controlling mopping robots, mobile manipulators, or appliances in smart-home scenarios. The recognition is based on finger segmentation and template matching. A distance transform algorithm is adopted and adapted to robustly segment fingers from the hand. Based on the fingers' angles relative to the wrist, a finger angle prediction algorithm and a template matching metric are proposed. All possible gesture types of the captured image are first predicted, and then evaluated and compared to the template image to achieve the classification. Unlike other template matching methods that rely heavily on a large training set, this scheme possesses high flexibility since it requires only one image as the template and can classify gestures formed by different combinations of fingers. In the experiment, it successfully recognized ten finger gestures, from number zero to nine as defined by American Sign Language, with an accuracy of up to 99.38%. Its performance was further demonstrated by manipulating a robot arm using the implemented algorithms and WwwCam to transport and pile up wooden building blocks.
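
    A rough sketch of the finger-segmentation idea: in a binary hand mask the distance transform peaks at the palm centre, pixels well outside the palm disc can be treated as finger regions, and finger angles relative to the wrist follow from the region centroids. The OpenCV pipeline and the threshold factor below are assumptions for illustration, not the paper's exact method.

```python
import cv2
import numpy as np

def finger_angles(hand_mask, wrist_xy, finger_factor=1.6):
    """hand_mask: uint8 binary image (255 = hand); wrist_xy: (x, y) wrist point.
    Returns the angles (degrees) of detected finger regions measured at the wrist."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, palm_radius, _, palm_center = cv2.minMaxLoc(dist)  # peak = palm centre

    # Pixels well outside the palm disc are candidate finger pixels.
    yy, xx = np.indices(hand_mask.shape)
    d_palm = np.hypot(xx - palm_center[0], yy - palm_center[1])
    fingers = ((hand_mask > 0) & (d_palm > finger_factor * palm_radius)).astype(np.uint8)

    n, _, stats, centroids = cv2.connectedComponentsWithStats(fingers)
    angles = []
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 30:     # ignore tiny blobs
            continue
        cx, cy = centroids[i]
        angles.append(np.degrees(np.arctan2(wrist_xy[1] - cy, cx - wrist_xy[0])))
    return sorted(angles)
```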

  9. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of the mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modifications in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was a precursor of a more general system reciprocally relating gesture to word.

  10. Gestures Specialized for Dialogue.

    Science.gov (United States)

    Bavelas, Janet Beavin; And Others

    1995-01-01

    Explored how hand gestures help interlocutors coordinate their dialogue. Analysis of dyadic conversations and monologues revealed that requirements of dialogue uniquely affect interactive gestures. Gestures aided the speaker's efforts to include the addressee in the conversation. Gestures also demonstrated the importance of social processes in…

  11. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes emotion... recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility to use non-audio/video sensors in order to design a low-cost gesture recognition device...

  12. Gestures maintain spatial imagery.

    Science.gov (United States)

    Wesp, R; Hesse, J; Keutmann, D; Wheaton, K

    2001-01-01

    Recent theories suggest alternatives to the commonly held belief that the sole role of gestures is to communicate meaning directly to listeners. Evidence suggests that gestures may serve a cognitive function for speakers, possibly acting as lexical primes. We observed that participants gestured more often when describing a picture from memory than when the picture was present and that gestures were not influenced by manipulating eye contact of a listener. We argue that spatial imagery serves a short-term memory function during lexical search and that gestures may help maintain spatial images. When spatial imagery is not necessary, as in conditions of direct visual stimulation, reliance on gestures is reduced or eliminated.

  13. Gestural interaction in a virtual environment

    Science.gov (United States)

    Jacoby, Richard H.; Ferneau, Mark; Humphries, Jim

    1994-04-01

    This paper discusses the use of hand gestures (i.e., changing finger flexion) within a virtual environment (VE). Many systems now employ static hand postures (i.e., static finger flexion), often coupled with hand translations and rotations, as a method of interacting with a VE. However, few systems are currently using dynamically changing finger flexion for interacting with VEs. In our system, the user wears an electronically instrumented glove. We have developed a simple algorithm for recognizing gestures for use in two applications: automotive design and visualization of atmospheric data. In addition to recognizing the gestures, we also calculate the rate at which the gestures are made and the rate and direction of hand movement while making the gestures. We report on our experiences with the algorithm design and implementation, and the use of the gestures in our applications. We also talk about our background work in user calibration of the glove, as well as learned and innate posture recognition (postures recognized with and without training, respectively).

  14. In Our Hands: An Ethics of Gestural Response-ability. Rebecca Schneider in conversation with Lucia Ruprecht

    Directory of Open Access Journals (Sweden)

    Rebecca Schneider

    2017-06-01

    Full Text Available The following conversation aims to trace the role of gesture and gestural thinking in Rebecca Schneider’s work, and to tease out the specific gestural ethics which arises in her writings. In particular, Schneider thinks about the politics of citation and reiteration for an ethics of call and response that emerges in the gesture of the hail. Both predicated upon a fundamentally ethical relationality and susceptible to ideological investment, the hail epitomises the operations of the “both/and”—a logic of conjunction that structures and punctuates the history of thinking on gesture from the classic Brechtian tactic in which performance both replays and counters conditions of subjugation to Alexander Weheliye’s reclamation of this tactic for black and critical ethnic studies. The gesture of the hail will lead us, then, to the gesture of protest in the Black Lives Matter movement. The hands that are held up in the air both replay (and respond to) the standard pose of surrender in the face of police authority and call for a future that might be different. Schneider’s ethics of response-ability thus rethinks relationality as something that always already anticipates and perpetually reinaugurates possibilities for response.

  15. The development of gesture

    OpenAIRE

    Tellier, Marion

    2009-01-01

    Human beings gesture everyday while speaking: they move their hands, their heads, their arms; their whole body is involved in communication. But how does it work? How do we produce gestures and in what purpose? How are gestures connected to speech? When do we begin producing gestures and how do they evolve throughout the life span? These are questions gesture researchers have been trying to answer since the second half of the 20th century. This chapter will first define what a gesture is by d...

  16. Effect of Dialogue on Demonstrations: Direct Quotations, Facial Portrayals, Hand Gestures, and Figurative References

    Science.gov (United States)

    Bavelas, Janet; Gerwing, Jennifer; Healing, Sara

    2014-01-01

    "Demonstrations" (e.g., direct quotations, conversational facial portrayals, conversational hand gestures, and figurative references) lack conventional meanings, relying instead on a resemblance to their referent. Two experiments tested our theory that demonstrations are a class of communicative acts that speakers are more likely to use…

  17. The comprehension of gesture and speech

    NARCIS (Netherlands)

    Willems, R.M.; Özyürek, A.; Hagoort, P.

    2005-01-01

    Although generally studied in isolation, action observation and speech comprehension go hand in hand during everyday human communication. That is, people gesture while they speak. From previous research it is known that a tight link exists between spoken language and such hand gestures. This study

  18. Gestures in an Intelligent User Interface

    Science.gov (United States)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

    In this chapter we investigated which hand gestures are intuitive for controlling a large display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully control a large display multimedia interface, intuitively. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact with 2D and 3D visualizations of biochemical structures using both hands and the preferred hand gestures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  19. Gesturing Gives Children New Ideas About Math

    Science.gov (United States)

    Goldin-Meadow, Susan; Cook, Susan Wagner; Mitchell, Zachary A.

    2009-01-01

    How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands. PMID:19222810

  20. The sound of one-hand clapping: handedness and perisylvian neural correlates of a communicative gesture in chimpanzees.

    Science.gov (United States)

    Meguerditchian, Adrien; Gardner, Molly J; Schapiro, Steven J; Hopkins, William D

    2012-05-22

    Whether lateralization of communicative signalling in non-human primates might constitute prerequisites of hemispheric specialization for language is unclear. In the present study, we examined (i) hand preference for a communicative gesture (clapping in 94 captive chimpanzees from two research facilities) and (ii) the in vivo magnetic resonance imaging brain scans of 40 of these individuals. The preferred hand for clapping was defined as the one in the upper position when the two hands came together. Using computer manual tracing of regions of interest, we measured the neuroanatomical asymmetries for the homologues of key language areas, including the inferior frontal gyrus (IFG) and planum temporale (PT). When considering the entire sample, there was a predominance of right-handedness for clapping and the distribution of right- and left-handed individuals did not differ between the two facilities. The direction of hand preference (right- versus left-handed subjects) for clapping explained a significant portion of variability in asymmetries of the PT and IFG. The results are consistent with the view that gestural communication in the common ancestor may have been a precursor of language and its cerebral substrates in modern humans.

  1. The effects of hand gestures on verbal recall as a function of high- and low-verbal-skill levels.

    Science.gov (United States)

    Frick-Horbury, Donna

    2002-04-01

    The author examined the effects of cueing for verbal recall with the accompanying self-generated hand gestures as a function of verbal skill. There were 36 participants, half with low SAT verbal scores and half with high SAT verbal scores. Half of the participants of each verbal-skill level were cued for recall with their own gestures, and the remaining half was given a free-recall test. Cueing with self-generated gestures aided the low-verbal-skill participants so that their retrieval rate equaled that of the high-verbal-skill participants and their loss of recall over a 2-week period was minimal. This effect was stable for both concrete and abstract words. The findings support the hypothesis that gestures serve as an auxiliary code for memory retrieval.

  2. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension.

    Science.gov (United States)

    Klooster, Nathaniel B; Cook, Susan W; Uc, Ergun Y; Duff, Melissa C

    2014-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  3. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  4. The Automatic Annotation of the Semiotic Type of Hand Gestures in Obama’s Humorous Speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2018-01-01

    is expressed by speech or by adding new information to what is uttered. The automatic classification of the semiotic type of gestures from their shape description can contribute to their interpretation in human-human communication and in advanced multimodal interactive systems. We annotated and analysed hand...

  5. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    Directory of Open Access Journals (Sweden)

    Nathaniel Bloem Klooster

    2015-01-01

    Full Text Available Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson’s disease, and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  6. An in-situ trainable gesture classifier

    NARCIS (Netherlands)

    van Diepen, A.; Cox, M.G.H.; de Vries, A.; Duivesteijn, W.; Pechenizkiy, M.; Fletcher, G.H.L.

    2017-01-01

    Gesture recognition, i.e., the recognition of pre-defined gestures by arm or hand movements, enables a natural extension of the way we currently interact with devices (Horsley, 2016). Commercially available gesture recognition systems are usually pre-trained: the developers specify a set of

  7. An Ubiquitous and Non Intrusive System for Pervasive Advertising using NFC and Geolocation Technologies and Air Hand Gestures

    Directory of Open Access Journals (Sweden)

    Francisco M. Borrego-Jaraba

    2014-01-01

    Full Text Available In this paper we present a pervasive proposal for advertising using mobile phones, Near Field Communication, geolocation, and air hand gestures. Advertising posts built by users in public/private spaces can store multiple ads containing any kind of textual, graphic, or multimedia information. Ads are automatically shown on users' mobile phones through a notification-based process that considers the relative location of the user with respect to the posts, as well as the user's preferences. Moreover, ads can be stored in and retrieved from a post using hand gestures and Near Field Communication technology. Secure management of information about users, posts, and notifications, together with the use of instant messaging, enables the development of systems that extend current advertising strategies based on the Web, large displays, or digital signage.

  8. How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis.

    Science.gov (United States)

    Kita, Sotaro; Alibali, Martha W; Chu, Mingyuan

    2017-04-01

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in 4 main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between self-oriented and communicative functions of gestures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Gesture Commanding of a Robot with EVA Gloves

    Data.gov (United States)

    National Aeronautics and Space Administration — Gestures commands allow a human operator to directly interact with a robot without the use of intermediary hand controllers. There are two main types of hand gesture...

  10. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    OpenAIRE

    Klooster, Nathaniel B.; Cook, Susan W.; Uc, Ergun Y.; Duff, Melissa C.

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that proc...

  11. Head and eye movement as pointing modalities for eyewear computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Pederson, Thomas

    2014-01-01

    While the new generation of eyewear computers has increased expectations of wearable computers, providing input to these devices is still challenging. Hand-held devices, voice commands, and hand gestures have already been explored to provide input to wearable devices. In this paper, we examined using head and eye movements to point on a graphical user interface of a wearable computer. The performance of users in head and eye pointing has been compared with mouse pointing as a baseline method. The result of our experiment showed that eye pointing is significantly faster than head...

  12. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    The main aim of this paper is to investigate speech pauses and gestures as means to engage the audience and present the humorous message in an effective way. The data consist of two speeches by US president Barack Obama at the 2011 and 2016 Annual White House Correspondents’ Association Dinner...... produced significantly more hand gestures in 2016 than in 2011. An analysis of the hand gestures produced by Barack Obama in two political speeches held at the United Nations in 2011 and 2016 confirms that the president produced significantly fewer communicative co-speech hand gestures during his speeches...... and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama’s speech pauses and audience response. Obama produces numerous head movements, facial expressions and hand gestures, and their functions are related to both discourse content and structure...

  13. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard- and mouse-based interfaces.

  14. GESTURE'S ROLE IN CREATING AND LEARNING LANGUAGE.

    Science.gov (United States)

    Goldin-Meadow, Susan

    2010-09-22

    Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.

  15. Computer-Assisted Culture Learning in an Online Augmented Reality Environment Based on Free-Hand Gesture Interaction

    Science.gov (United States)

    Yang, Mau-Tsuen; Liao, Wan-Che

    2014-01-01

    The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…

  16. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
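
    A minimal sketch of the SVM-plus-parameter-search idea described above, tuning an SVM's C and gamma with a tiny particle swarm. The synthetic "data-glove" features, the six gesture classes, and all PSO settings are illustrative assumptions, not values from the paper.

```python
# Sketch: tune an SVM's (C, gamma) with a small particle swarm, as a stand-in for the
# SVM+PSO gesture classifier summarized above. Data and PSO settings are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Fake data-glove readings: 300 samples, 10 finger-bend features, 6 gesture classes.
X = rng.normal(size=(300, 10))
y = rng.integers(0, 6, size=300)
X += y[:, None] * 0.5                                   # make the classes weakly separable

def fitness(log_c, log_gamma):
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()      # cross-validated accuracy

n_particles, n_iter = 8, 15
pos = rng.uniform(-2, 2, size=(n_particles, 2))         # particles live in (log10 C, log10 gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -4, 4)
    vals = np.array([fitness(*p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())
```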

  17. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
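
    A minimal sketch of the best-performing pipeline reported above: desaturation (grey-scale conversion) followed by a nearest-neighbour classifier. The random placeholder images, the 64x64 size, and the choice of 1-NN with Euclidean distance are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: desaturation pre-processing followed by nearest-neighbor classification.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def preprocess(bgr_img, size=(64, 64)):
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)    # the "desaturation" step
    return cv2.resize(gray, size).flatten().astype(np.float32)

# Placeholder data: 60 random color images standing in for gesture photos, 6 classes.
images = [rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8) for _ in range(60)]
labels = [i % 6 for i in range(60)]

X = np.stack([preprocess(im) for im in images])
clf = KNeighborsClassifier(n_neighbors=1).fit(X[:50], labels[:50])
print("predicted classes for the held-out images:", clf.predict(X[50:]))
```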

  18. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    OpenAIRE

    Nathaniel Bloem Klooster; Susan Wagner Cook; Ergun Y. Uc; Melissa C. Duff

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that proc...

  19. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.

  20. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  1. Real-time face and gesture analysis for human-robot interaction

    Science.gov (United States)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and face-related features, low-level image features of the human hand (optical flow, Hu moments) are stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
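
    A minimal sketch of the low-level hand features mentioned above: Hu moments of a roughly thresholded silhouette and dense optical flow between two frames. The random frames and the threshold value are placeholder assumptions; the HMM classification stage is not shown.

```python
# Sketch: compute Hu moments and Farneback optical flow as low-level hand features.
import cv2
import numpy as np

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)    # stand-ins for two
curr = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)    # consecutive grey frames

# Hu moments of the (roughly) thresholded "hand" region in the current frame.
_, mask = cv2.threshold(curr, 128, 255, cv2.THRESH_BINARY)
hu = cv2.HuMoments(cv2.moments(mask)).flatten()

# Dense optical flow between the two frames.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5, levels=3,
                                    winsize=15, iterations=3, poly_n=5,
                                    poly_sigma=1.2, flags=0)
print("Hu moments:", hu)
print("mean flow magnitude:", float(np.linalg.norm(flow, axis=2).mean()))
```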

  2. Eye-hand coupling during closed-loop drawing: evidence of shared motor planning?

    Science.gov (United States)

    Reina, G Anthony; Schwartz, Andrew B

    2003-04-01

    Previous paradigms have used reaching movements to study coupling of eye-hand kinematics. In the present study, we investigated eye-hand kinematics as curved trajectories were drawn at normal speeds. Eye and hand movements were tracked as a monkey traced ellipses and circles with the hand in free space while viewing the hand's position on a computer monitor. The results demonstrate that the movement of the hand was smooth and obeyed the 2/3 power law. Eye position, however, was restricted to 2-3 clusters along the hand's trajectory and fixed approximately 80% of the time in one of these clusters. The eye remained stationary as the hand moved away from the fixation for up to 200 ms and saccaded ahead of the hand position to the next fixation along the trajectory. The movement from one fixation cluster to another consistently occurred just after the tangential hand velocity had reached a local minimum, but before the next segment of the hand's trajectory began. The next fixation point was close to an area of high curvature along the hand's trajectory even though the hand had not reached that point along the path. A visuo-motor illusion of hand movement demonstrated that the eye movement was influenced by hand movement and not simply by visual input. During the task, neural activity of pre-motor cortex (area F4) was recorded using extracellular electrodes and used to construct a population vector of the hand's trajectory. The results suggest that the saccade onset is correlated in time with maximum curvature in the population vector trajectory for the hand movement. We hypothesize that eye and arm movements may have common, or shared, information in forming their motor plans.

  3. Tactile Feedback for Above-Device Gesture Interfaces

    OpenAIRE

    Freeman, Euan; Brewster, Stephen; Lantz, Vuokko

    2014-01-01

    Above-device gesture interfaces let people interact in the space above mobile devices using hand and finger movements. For example, users could gesture over a mobile phone or wearable without having to use the touchscreen. We look at how above-device interfaces can also give feedback in the space over the device. Recent haptic and wearable technologies give new ways to provide tactile feedback while gesturing, letting touchless gesture interfaces give touch feedback. In this paper we take a f...

  4. Gesturing Makes Memories that Last

    Science.gov (United States)

    Cook, Susan Wagner; Yip, Terina KuangYi; Goldin-Meadow, Susan

    2010-01-01

    When people are asked to perform actions, they remember those actions better than if they are asked to talk about the same actions. But when people talk, they often gesture with their hands, thus adding an action component to talking. The question we asked in this study was whether producing gesture along with speech makes the information encoded…

  5. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition.

    Science.gov (United States)

    Janke, Vikki; Marshall, Chloë R

    2017-01-01

    An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the

  6. Seeing Iconic Gestures While Encoding Events Facilitates Children's Memory of These Events.

    Science.gov (United States)

    Aussems, Suzanne; Kita, Sotaro

    2017-11-08

    An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions. Iconic gestures were categorized based on how much of the actors was represented by the hands (feet, legs, or body). Only iconic hand-as-body gestures boosted actor memory. Thus, seeing iconic gestures while encoding events facilitates children's memory of those aspects of events that are schematically highlighted by gesture. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  7. Gesture Imitation in Schizophrenia

    Science.gov (United States)

    Matthews, Natasha; Gold, Brian J.; Sekuler, Robert; Park, Sohee

    2013-01-01

    Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning. PMID:21765171

  8. Gesture imitation in schizophrenia.

    Science.gov (United States)

    Matthews, Natasha; Gold, Brian J; Sekuler, Robert; Park, Sohee

    2013-01-01

    Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning.

  9. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in computer vision applications, including face detection and hand gesture recognition systems, mostly because of the attractive characteristics of skin colour and its effectiveness for object segmentation. However, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, owing to varying illumination conditions, complicated environments, and the need for real-time computation. These challenges have made many of the existing skin colour segmentation approaches insufficient. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. The scheme includes two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region, while the second, offline training procedure uses thresholds trained from skin samples and a weighted equation. The experimental results show that the proposed scheme achieves good performance in terms of efficiency and computation time.
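
    A minimal sketch of Cb-Cr thresholding in the spirit of the scheme above. The numeric Cr/Cb ranges are common generic skin values, not the thresholds produced by the paper's online (nose-pixel) or offline training procedures, and the random frame is a placeholder input.

```python
# Sketch: segment skin-colored pixels by thresholding the Cr and Cb channels.
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)   # stand-in BGR frame

ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)        # OpenCV channel order: Y, Cr, Cb
lower = np.array([0, 133, 77], dtype=np.uint8)          # Y ignored, Cr >= 133, Cb >= 77
upper = np.array([255, 173, 127], dtype=np.uint8)       # Cr <= 173, Cb <= 127
skin_mask = cv2.inRange(ycrcb, lower, upper)            # 255 where a pixel looks like skin
print("skin pixels found:", int(np.count_nonzero(skin_mask)))
```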

  10. An Infant Development-inspired Approach to Robot Hand-eye Coordination

    Directory of Open Access Journals (Sweden)

    Fei Chao

    2014-02-01

    Full Text Available This paper presents a novel developmental learning approach for hand-eye coordination in an autonomous robotic system. Robotic hand-eye coordination plays an important role in dealing with real-time environments. Under the approach, infant developmental patterns are introduced to build our robot's learning system. The method works by first constructing a brain-like computational structure to control the robot, and then by using infant behavioural patterns to build a hand-eye coordination learning algorithm. This work is supported by an experimental evaluation, which shows that the control system is implemented simply, and that the learning approach provides fast and incremental learning of behavioural competence.

  11. Patients with hippocampal amnesia successfully integrate gesture and speech.

    Science.gov (United States)

    Hilverman, Caitlin; Clough, Sharice; Duff, Melissa C; Cook, Susan Wagner

    2018-06-19

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms. Copyright © 2018. Published by Elsevier Ltd.

  12. Unilateral Keratoconus after Chronic Eye Rubbing by the Nondominant Hand

    Directory of Open Access Journals (Sweden)

    Nathalie Bral

    2017-12-01

    Full Text Available Introduction: To report the development of unilateral keratoconus in a healthy male after persistent unilateral eye rubbing by the nondominant hand which was not needed for professional activities. Methods: Observational case report. Results: A 60-year-old male was first seen in our clinic due to decreased vision in his left eye. Slit-lamp biomicroscopy of the left eye revealed Vogt’s striae, stromal thinning, and a stromal scar. Corneal topography showed a stage 4 keratoconus. Clinical examination and corneal topography of the right eye were normal. Medical history revealed a habit of chronic eye rubbing only in the left eye because of the right hand being occupied for professional needs. During follow-up of 5 years, Scheimpflug images of the right eye stayed normal while the left eye showed a stable cone. Discussion: This case report supports the hypothesis of mechanical fatigue of the cornea due to repetitive shear stress on the surface caused by eye-rubbing.

  13. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

    OpenAIRE

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the...

  14. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    Science.gov (United States)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
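
    A minimal sketch of one step of the eye-tracking part described above: locating a pupil centre by dark-region thresholding and image moments. The synthetic eye image and the threshold value are placeholder assumptions.

```python
# Sketch: estimate the pupil center from a (synthetic) eye image.
import cv2
import numpy as np

eye = np.full((120, 160), 200, dtype=np.uint8)          # bright background
cv2.circle(eye, (90, 60), 15, 20, -1)                   # dark disk standing in for the pupil

_, pupil = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)   # keep only the darkest region
contours, _ = cv2.findContours(pupil, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
m = cv2.moments(max(contours, key=cv2.contourArea))
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print("estimated pupil centre:", (round(cx, 1), round(cy, 1)))  # close to (90, 60)
```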

  15. Speech-associated gestures, Broca’s area, and the human mirror system

    Science.gov (United States)

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-direct actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  16. Eye-hand laterality and right thoracic idiopathic scoliosis.

    Science.gov (United States)

    Catanzariti, Jean-François; Guyot, Marc-Alexandre; Agnani, Olivier; Demaille, Samantha; Kolanowski, Elisabeth; Donze, Cécile

    2014-06-01

    The adolescent idiopathic scoliosis (AIS) pathogenesis remains unknown. Certain studies have shown that there is a correlation between manual laterality and scoliotic deviation. A full study of manual laterality needs to be paired with one of visual dominance. With the aim of physiopathological research, we evaluated manual and visual laterality in AIS. A retrospective study from prospective data collection was used to evaluate the distribution of eye-hand laterality (homogeneous or crossed) in 65 right thoracic AIS patients (mean age 14.8 ± 1.8 years; mean Cobb angle: 32.8°) and a control group of 65 sex- and age-matched subjects (mean age 14.6 ± 1.8 years). Manual laterality was defined by the modified Edinburgh Handedness Inventory. Visual laterality was evaluated using three tests (kaleidoscope test, hole-in-the-card test, distance-hole-in-the-card test). The group of right thoracic AIS presents a significantly higher frequency of crossed eye-hand laterality than the control group (63 vs. 29.2 %). The most frequent crossed eye-hand laterality is "right hand dominant-left eye dominant" (82.9 %). There is no relationship with the Cobb angle. Those with right thoracic AIS show a higher occurrence of crossed eye-hand laterality. This could point physiopathological research of AIS towards functional abnormality of the optic chiasma through underuse of crossed visual pathways, and in particular accessory optic pathways. It would be useful to explore this by carrying out research on AIS through neuroimaging and neurofunctional exploration.

  17. Exploring the Use of Discrete Gestures for Authentication

    Science.gov (United States)

    Chong, Ming Ki; Marsden, Gary

    Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.

  18. Gestural Interaction for Virtual Reality Environments through Data Gloves

    Directory of Open Access Journals (Sweden)

    G. Rodriguez

    2017-05-01

    Full Text Available In virtual environments, virtual hand interactions play a key role in interactivity and realism, allowing fine motions to be performed. Data gloves are widely used in Virtual Reality (VR), and by simulating the natural anatomy of a human hand (the avatar's hands) in appearance and motion it is possible to interact with the environment and virtual objects. Recently, hand gestures have come to be considered one of the most meaningful and expressive signals. Consequently, this paper explores the use of hand gestures as a means of Human-Computer Interaction (HCI) for VR applications through data gloves. Using a hand gesture recognition and tracking method, accurate and real-time interactive performance can be obtained. To verify the effectiveness and usability of the system, an experiment on ease of learning based on execution time was performed. The experimental results demonstrate that this interaction approach does not present problems for people experienced in the use of computer applications; people with only basic knowledge have some problems at first, but the system becomes easy to use with practice.

  19. Waving real hand gestures recorded by wearable motion sensors to a virtual car and driver in a mixed-reality parking game

    NARCIS (Netherlands)

    Bannach, D.; Amft, O.D.; Kunze, K.S.; Heinz, E.A.; Tröster, G.; Lukowicz, P.

    2007-01-01

    We envision to add context awareness and ambient intelligence to edutainment and computer gaming applications in general. This requires mixed-reality setups and ever-higher levels of immersive human-computer interaction. Here, we focus on the automatic recognition of natural human hand gestures

  20. Hand-eye coordinative remote maintenance in a tokamak vessel

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, Qiang, E-mail: qiu6401@sjtu.edu.cn; Gu, Kai, E-mail: gukai0707@sjtu.edu.cn; Wang, Pengfei, E-mail: wpf790714@163.com; Bai, Weibang, E-mail: 654253204@qq.com; Cao, Qixin, E-mail: qxcao@sjtu.edu.cn

    2016-03-15

    Highlights: • If there is no rotation between the visual coordinate frame (O_eX_eY_e) and the hand coordinate frame (O_hX_hY_h), a person can coordinate the movement between hand and eye easily. • We establish an alignment between the movement of the operator's hand and the visual scene of the end-effector as displayed on the monitor. • A potential function is set up in a simplified vacuum vessel model to provide fast collision checking, and the alignment between the repulsive force and the Omega 7 feedback force is accomplished. • We carry out an experiment to evaluate its performance in a remote handling task. - Abstract: Reliability is vitally important for remote maintenance in a tokamak vessel. In order to establish a more accurate and safer remote handling system, a hand-eye coordination method and an artificial potential function based collision avoidance method were proposed in this paper. At the end of this paper, these methods were applied to a bolt-tightening maintenance task, which was carried out on our 1/10 scale tokamak model. Experimental results have verified the value of the hand-eye coordination method and the collision avoidance method.
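
    A minimal sketch of the repulsive part of an artificial potential function for collision avoidance, as mentioned in the highlights above. The obstacle point, influence radius rho0, and gain eta are illustrative assumptions, not values from the paper.

```python
# Sketch: classic repulsive-potential force that grows as the end-effector nears an obstacle.
import numpy as np

def repulsive_force(p, obstacle, rho0=0.2, eta=1.0):
    """Zero beyond the influence radius rho0; grows steeply close to the obstacle."""
    d_vec = p - obstacle
    rho = float(np.linalg.norm(d_vec))
    if rho >= rho0 or rho == 0.0:
        return np.zeros(3)
    return eta * (1.0 / rho - 1.0 / rho0) / rho**2 * (d_vec / rho)

end_effector = np.array([0.05, 0.0, 0.10])   # hypothetical end-effector position (m)
wall_point = np.array([0.0, 0.0, 0.0])       # nearest point on the vessel wall (m)
print("repulsive force:", repulsive_force(end_effector, wall_point))
```

    Such a force could be fed back to the operator's haptic device (here, the Omega 7) so that the felt resistance grows as the arm approaches the vessel wall.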

  1. Hand-eye coordinative remote maintenance in a tokamak vessel

    International Nuclear Information System (INIS)

    Qiu, Qiang; Gu, Kai; Wang, Pengfei; Bai, Weibang; Cao, Qixin

    2016-01-01

    Highlights: • If there is no rotation between the visual coordinate frame (O_eX_eY_e) and the hand coordinate frame (O_hX_hY_h), a person can coordinate the movement between hand and eye easily. • We establish an alignment between the movement of the operator's hand and the visual scene of the end-effector as displayed on the monitor. • A potential function is set up in a simplified vacuum vessel model to provide fast collision checking, and the alignment between the repulsive force and the Omega 7 feedback force is accomplished. • We carry out an experiment to evaluate its performance in a remote handling task. - Abstract: Reliability is vitally important for remote maintenance in a tokamak vessel. In order to establish a more accurate and safer remote handling system, a hand-eye coordination method and an artificial potential function based collision avoidance method were proposed in this paper. At the end of this paper, these methods were applied to a bolt-tightening maintenance task, which was carried out on our 1/10 scale tokamak model. Experimental results have verified the value of the hand-eye coordination method and the collision avoidance method.

  2. Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2018-01-01

    Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.

  3. The Use of Gestural Modes to Enhance Expressive Conducting at All Levels of Entering Behavior through the Use of Illustrators, Affect Displays and Regulators

    Science.gov (United States)

    Mathers, Andrew

    2009-01-01

    In this article, I discuss the use of illustrators, affect displays and regulators, which I consider to be non-verbal communication categories through which conductors can employ a more varied approach to body use, gesture and non-verbal communication. These categories employ the use of a conductor's hands and arms, face, eyes and body in a way…

  4. The gesture in Physical Culture career teaching

    Directory of Open Access Journals (Sweden)

    Alina Bestard-Revilla

    2015-04-01

    Full Text Available The research deals with the interpretation of the gestures of Physical Culture Career teachers, with the objective of revealing the senses that underlie the pedagogical interaction between the teacher and the students. It also addresses the analysis and understanding of the teacher's gestures during these pedagogical interactions. The research answers the following question: how can the gestures of Physical Culture university teachers be used to advantage for a greater quality of their lessons? It looks precisely for gesture interpretation, analyzes what underlies a gesture in a teaching-learning space, and reveals the meanings contained in a glance, the signals of the hands, the corporal postures, the approaches and the smiles, among other important expressions in the teachers' communicative situations, in correspondence with the students' gestures.

  5. Beat gestures and prosodic prominence: impact on learning

    OpenAIRE

    Kushch, Olga

    2018-01-01

    Previous research has shown that gestures are beneficial for language learning. This doctoral thesis centers on the effects of beat gestures (i.e., hand and arm gestures that are typically associated with prosodically prominent positions in speech) on such processes. Little is known about how the two central properties of beat gestures, namely how they mark both information focus and rhythmic positions in speech, can be beneficial for learning either a first or a second language. The main go...

  6. Research on direct calibration method of eye-to-hand system of robot

    Science.gov (United States)

    Hu, Xiaoping; Xie, Ke; Peng, Tao

    2013-10-01

    In position-based visual servoing control for robots, hand-eye calibration is very important because it affects the control precision of the system. For a robot with an eye-to-hand stereovision system, this paper proposes a direct method of hand-eye calibration. The method utilizes the triangle measuring principle to solve the coordinates of scene points in the camera coordinate system. It calculates the estimated coordinates through the hand-eye calibration equation set, which expresses the transformational relation from the robot to the camera coordinate system, and then uses the error between the actual and estimated coordinates to establish the objective function. Finally, the method substitutes the parameters into the function repeatedly until it converges, optimizing the result. A related experiment, comparing the measured coordinates with the actual coordinates, shows the efficiency and precision of the method.
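
    A minimal sketch of recovering a robot-to-camera rigid transform from corresponding 3D points, in the spirit of the direct calibration above. It uses a closed-form SVD (Kabsch) fit rather than the paper's iterative refinement of the objective function; the synthetic point sets and ground-truth transform are illustrative assumptions.

```python
# Sketch: closed-form rigid registration between robot-frame and camera-frame points.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q            # a proper ground-truth rotation
t_true = np.array([0.1, -0.2, 0.8])

P_robot = rng.uniform(-0.5, 0.5, size=(20, 3))        # scene points in the robot frame
P_cam = P_robot @ R_true.T + t_true                   # the same points in the camera frame

# Center both point sets, then take the SVD of their cross-covariance (Kabsch).
cr, cc = P_robot.mean(axis=0), P_cam.mean(axis=0)
U, _, Vt = np.linalg.svd((P_robot - cr).T @ (P_cam - cc))
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R_est = Vt.T @ D @ U.T
t_est = cc - R_est @ cr
print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```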

  7. The role of gestures in spatial working memory and speech.

    Science.gov (United States)

    Morsella, Ezequiel; Krauss, Robert M

    2004-01-01

    Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

  8. A cross-species study of gesture and its role in symbolic development: Implications for the gestural theory of language evolution

    Directory of Open Access Journals (Sweden)

    Kristen eGillespie-Lynch

    2013-06-01

    Full Text Available Using a naturalistic video database, we examined whether gestures scaffolded the symbolic development of a language-enculturated chimpanzee, a language-enculturated bonobo, and a human child during the second year of life. These three species constitute a complete clade: species possessing a common immediate ancestor. A basic finding was the functional and formal similarity of many gestures between chimpanzee, bonobo, and human child. The child’s symbols were spoken words; the apes’ symbols were lexigrams, noniconic visual signifiers. A developmental pattern in which gestural representation of a referent preceded symbolic representation of the same referent appeared in all three species (but was statistically significant only for the child). Nonetheless, across species, the ratio of symbol to gesture increased significantly with age. But even though their symbol production increased, the apes continued to communicate more frequently by gesture than by symbol. In contrast, by 15-18 months of age, the child used symbols more frequently than gestures. This ontogenetic sequence from gesture to symbol, present across the clade but more pronounced in child than ape, provides support for the role of gesture in language evolution. In all three species, the overwhelming majority of gestures were communicative (paired with eye-contact, vocalization, and/or persistence). However, vocalization was rare for the apes, but accompanied the majority of the child’s communicative gestures. This finding suggests the co-evolution of speech and gesture after the evolutionary divergence of the hominid line. Multimodal expressions of communicative intent (e.g., vocalization plus persistence) were normative for the child, but less common for the apes. This finding suggests that multimodal expression of communicative intent was also strengthened after hominids diverged from apes.

  9. Comprehension of iconic gestures by chimpanzees and human children.

    Science.gov (United States)

    Bohn, Manuel; Call, Josep; Tomasello, Michael

    2016-02-01

    Iconic gestures (communicative acts using hand or body movements that resemble their referent) figure prominently in theories of language evolution and development. This study contrasted the abilities of chimpanzees (N=11) and 4-year-old human children (N=24) to comprehend novel iconic gestures. Participants learned to retrieve rewards from apparatuses in two distinct locations, each requiring a different action. In the test, a human adult informed the participant where to go by miming the action needed to obtain the reward. Children used the iconic gestures (more than arbitrary gestures) to locate the reward, whereas chimpanzees did not. Some children also used arbitrary gestures in the same way, but only after they had previously shown comprehension for iconic gestures. Over time, chimpanzees learned to associate iconic gestures with the appropriate location faster than arbitrary gestures, suggesting at least some recognition of the iconicity involved. These results demonstrate the importance of iconicity in referential communication. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Eye tracking a self-moved target with complex hand-target dynamics

    Science.gov (United States)

    Landelle, Caroline; Montagnini, Anna; Madelain, Laurent

    2016-01-01

    Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129
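
    A minimal sketch of the "spring mapping" manipulated above: the visual target is attached to the hand by a simulated spring-damper instead of moving rigidly with it. The stiffness, damping, mass, and hand trajectory are illustrative assumptions, not the experiment's parameters.

```python
# Sketch: 1-D spring-damper coupling between a moving hand and the displayed target.
import numpy as np

dt, k, c, m = 0.01, 40.0, 2.0, 1.0            # time step (s), stiffness, damping, target mass
t = np.arange(0.0, 5.0, dt)
hand = np.sin(2 * np.pi * 0.5 * t)            # hand moving sinusoidally (arbitrary units)

target = np.zeros_like(t)
vel = 0.0
for i in range(1, len(t)):
    force = k * (hand[i - 1] - target[i - 1]) - c * vel   # spring pulls the target toward the hand
    vel += (force / m) * dt
    target[i] = target[i - 1] + vel * dt

print("max hand-target separation:", float(np.max(np.abs(hand - target))))
```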

  11. Doing science by waving hands: Talk, symbiotic gesture, and interaction with digital content as resources in student inquiry

    Science.gov (United States)

    Gregorcic, Bor; Planinsic, Gorazd; Etkina, Eugenia

    2017-12-01

    In this paper, we investigate some of the ways in which students, when given the opportunity and an appropriate learning environment, spontaneously engage in collaborative inquiry. We studied small groups of high school students interacting around and with an interactive whiteboard equipped with Algodoo software, as they investigated orbital motion. Using multimodal discourse analysis, we found that in their discussions the students relied heavily on nonverbal meaning-making resources, most notably hand gestures and resources in the surrounding environment (items displayed on the interactive whiteboard). They juxtaposed talk with gestures and resources in the environment to communicate ideas that they initially were not able to express using words alone. By spontaneously recruiting and combining a diverse set of meaning-making resources, the students were able to express relatively fluently complex ideas on a novel physics topic, and to engage in practices that resemble a scientific approach to exploration of new phenomena.

  12. Eye-hand exercise: new variant in amblyopia management.

    Science.gov (United States)

    Svĕrák, J; Peregrin, J; Juran, J

    1990-01-01

    A total of 50 children with unilateral amblyopia were treated by short, 10-minute weekly occlusions of the visually well (non-amblyopic) eye. During the occlusion the child performs intensive, detailed visual activities under supervision. After an interval of approximately half a year, the "eye-hand" exercise resulted in a mean improvement of visual acuity of 2.44 normalised lines. The visual-motor factor is involved in amblyopia treatment.

  13. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles

    Directory of Open Access Journals (Sweden)

    Dana Hughes

    2017-01-01

    Full Text Available We present a radio-frequency (RF) based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.
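
    A minimal sketch of the kind of small convolutional network mentioned above, classifying gestures from a single RF measurement trace. The trace length, channel counts, number of gesture classes, and the random input are illustrative assumptions; a network run on a microcontroller would be far smaller and quantized.

```python
# Sketch: a tiny 1-D CNN mapping one RF measurement trace to a gesture class.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 5),           # 5 hypothetical low-level gestures
)
x = torch.randn(4, 1, 128)                    # batch of 4 traces, 128 samples each
logits = model(x)
print(logits.shape)                           # -> torch.Size([4, 5])
```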

  14. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    Science.gov (United States)

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...

  15. Head-mounted eye tracking of a chimpanzee under naturalistic conditions.

    Directory of Open Access Journals (Sweden)

    Fumihiro Kano

    Full Text Available This study offers a new method for examining the bodily, manual, and eye movements of a chimpanzee at the micro-level. A female chimpanzee wore a lightweight head-mounted eye tracker (60 Hz) on her head while engaging in daily interactions with the human experimenter. The eye tracker recorded her eye movements accurately while the chimpanzee freely moved her head, hands, and body. Three video cameras recorded the bodily and manual movements of the chimpanzee from multiple angles. We examined how the chimpanzee viewed the experimenter in this interactive setting and how the eye movements were related to the ongoing interactive contexts and actions. We prepared two experimentally defined contexts in each session: a face-to-face greeting phase upon the appearance of the experimenter in the experimental room, and a subsequent face-to-face task phase that included manual gestures and fruit rewards. Overall, the general viewing pattern of the chimpanzee, measured in terms of duration of individual fixations, length of individual saccades, and total viewing duration of the experimenter's face/body, was very similar to that observed in previous eye-tracking studies that used non-interactive situations, despite the differences in the experimental settings. However, the chimpanzee viewed the experimenter and the scene objects differently depending on the ongoing context and actions. The chimpanzee viewed the experimenter's face and body during the greeting phase, but viewed the experimenter's face and hands as well as the fruit reward during the task phase. These differences can be explained by the differential bodily/manual actions produced by the chimpanzee and the experimenter during each experimental phase (i.e., greeting gestures, task cueing). Additionally, the chimpanzee's viewing pattern varied depending on the identity of the experimenter (i.e., the chimpanzee's prior experience with the experimenter). These methods and results offer new

  16. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    Science.gov (United States)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  17. From mouth to hand: gesture, speech, and the evolution of right-handedness.

    Science.gov (United States)

    Corballis, Michael C

    2003-04-01

    The strong predominance of right-handedness appears to be a uniquely human characteristic, whereas the left-cerebral dominance for vocalization occurs in many species, including frogs, birds, and mammals. Right-handedness may have arisen because of an association between manual gestures and vocalization in the evolution of language. I argue that language evolved from manual gestures, gradually incorporating vocal elements. The transition may be traced through changes in the function of Broca's area. Its homologue in monkeys has nothing to do with vocal control, but contains the so-called "mirror neurons," the code for both the production of manual reaching movements and the perception of the same movements performed by others. This system is bilateral in monkeys, but predominantly left-hemispheric in humans, and in humans is involved with vocalization as well as manual actions. There is evidence that Broca's area is enlarged on the left side in Homo habilis, suggesting that a link between gesture and vocalization may go back at least two million years, although other evidence suggests that speech may not have become fully autonomous until Homo sapiens appeared some 170,000 years ago, or perhaps even later. The removal of manual gesture as a necessary component of language may explain the rapid advance of technology, allowing late migrations of Homo sapiens from Africa to replace all other hominids in other parts of the world, including the Neanderthals in Europe and Homo erectus in Asia. Nevertheless, the long association of vocalization with manual gesture left us a legacy of right-handedness.

  18. RisQ: Recognizing Smoking Gestures with Inertial Sensors on a Wristband

    Science.gov (United States)

    Parate, Abhinav; Chiu, Meng-Chieh; Chadowitz, Chaniel; Ganesan, Deepak; Kalogerakis, Evangelos

    2015-01-01

    Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are fourfold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body together with 3D animation of a person's arm to reduce burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods. PMID:26688835
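
    The first stage of the pipeline, extracting candidate hand-to-mouth gestures, can be approximated with a much simpler rule than the paper's quaternion-based arm trajectories. The sketch below is a hedged illustration under that simplification: it flags windows in which the wrist pitch angle stays elevated for a plausible dwell time; the threshold, sampling rate, and dwell limits are assumed values, not those reported in the paper.

    ```python
    # Simplified candidate extraction: segments where wrist pitch stays high
    # for a plausible dwell time are treated as hand-to-mouth candidates.
    import numpy as np

    def candidate_segments(pitch, fs=20, min_pitch=60.0, min_dwell_s=1.0, max_dwell_s=8.0):
        """Return (start, end) sample indices of candidate hand-to-mouth gestures."""
        above = pitch > min_pitch
        edges = np.diff(above.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, len(pitch)]
        keep = []
        for s, e in zip(starts, ends):
            dwell = (e - s) / fs
            if min_dwell_s <= dwell <= max_dwell_s:   # too short = noise, too long = resting
                keep.append((int(s), int(e)))
        return keep

    # toy trace: 10 s of low pitch, 3 s raised towards the mouth, then lowered again
    pitch = np.r_[np.full(200, 20.0), np.full(60, 75.0), np.full(100, 15.0)]
    print(candidate_segments(pitch))   # -> [(200, 260)]
    ```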

  19. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays an important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
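
    A minimal sketch of the feature-level fusion strategy described above, under assumptions: synthetic facial and hand-motion feature vectors, arbitrary modality weights, and an LDA projection standing in for the discriminative expression space mentioned in the abstract.

    ```python
    # Feature-level fusion sketch: weight, concatenate, then project with LDA.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n, d_face, d_hand, n_classes = 120, 30, 40, 12
    face_feats = rng.normal(size=(n, d_face))       # stand-in facial expression features
    hand_feats = rng.normal(size=(n, d_hand))       # stand-in hand motion features
    labels = np.arange(n) % n_classes               # 12 gesture classes, as in the paper

    w_face, w_hand = 0.4, 0.6                       # modality weights (assumed)
    fused = np.hstack([w_face * face_feats, w_hand * hand_feats])

    lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
    projected = lda.fit_transform(fused, labels)    # discriminative "expression space"
    print(projected.shape)                          # (120, 11)
    ```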

  20. Smart Remote for the Setup Box Using Gesture Control

    OpenAIRE

    Surepally Uday Kumar; K. Shamini

    2016-01-01

    The basic purpose of this project is to provide a means to control a set top box (capable of infrared communication), in this case Hathway using hand gestures. Thus, this system will act like a remote control for operating set top box, but this will be achieved through hand gestures instead of pushing buttons. To send and receive remote control signals, this project uses an infrared LED as Transmitter. Using an infrared receiver, an Arduino can detect the bits being sent by a remo...

  1. Coronary Heart Disease Preoperative Gesture Interactive Diagnostic System Based on Augmented Reality.

    Science.gov (United States)

    Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua

    2017-08-01

    Coronary heart disease preoperative diagnosis plays an important role in the treatment of vascular interventional surgery. In practice, most doctors are used to diagnosing the position of a vascular stenosis and then empirically estimating the degree of stenosis from selective coronary angiography images, rather than using a mouse, keyboard and computer during preoperative diagnosis. This invasive diagnostic modality lacks intuitive and natural interaction, and the results are not accurate enough. To address these problems, a coronary heart disease preoperative gesture interactive diagnostic system based on Augmented Reality is proposed. The system uses a Leap Motion Controller to capture hand gesture video sequences and to extract features, namely the position and orientation vectors of the gesture motion trajectory and the change of the hand shape. The training plan is determined by the K-means algorithm, and the effect of gesture training is then improved by using multiple features and multiple observation sequences for gesture training. The reusability of gestures is improved by establishing a state transition model. Algorithm efficiency is improved by gesture prejudgment, which applies threshold discrimination before recognition. The integrity of the trajectory is preserved and the gesture motion space is extended by employing a spatial rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are realized intuitively and naturally by operating on and measuring the coronary artery model with augmented reality and gesture interaction techniques. All of the gesture recognition experiments show the discrimination ability and generalization ability of the algorithm, and the gesture interaction experiments prove the availability and reliability of the system.

  2. Angle-of-arrival-based gesture recognition using ultrasonic multi-frequency signals

    KAUST Repository

    Chen, Hui

    2017-11-02

    Hand gestures are tools for conveying information, expressing emotion, interacting with electronic devices or even serving disabled people as a second language. A gesture can be recognized by capturing the movement of the hand, in real time, and classifying the collected data. Several commercial products such as Microsoft Kinect, Leap Motion Sensor, Synertial Gloves and HTC Vive have been released and new solutions have been proposed by researchers to handle this task. These systems are mainly based on optical measurements, inertial measurements, ultrasound signals and radio signals. This paper proposes an ultrasonic-based gesture recognition system using AOA (Angle of Arrival) information of ultrasonic signals emitted from a wearable ultrasound transducer. The 2-D angles of the moving hand are estimated using multi-frequency signals captured by a fixed receiver array. A simple redundant dictionary matching classifier is designed to recognize gestures representing the numbers from '0' to '9' and compared with a neural network classifier. Average classification accuracies of 95.5% and 94.4% are obtained, respectively, using the two classification methods.
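
    A rough sketch of what dictionary-based matching over 2-D angle trajectories might look like; the templates, resampling length, and distance measure below are assumptions for illustration and do not reproduce the paper's redundant dictionary.

    ```python
    # Nearest-template matching over azimuth/elevation (AOA) trajectories.
    import numpy as np

    def resample(track, n=50):
        """Linearly resample a (T, 2) AOA track to n points per axis."""
        t_old = np.linspace(0, 1, len(track))
        t_new = np.linspace(0, 1, n)
        return np.column_stack([np.interp(t_new, t_old, track[:, k]) for k in range(2)])

    def classify(track, dictionary):
        """dictionary: list of (label, template_track); return the closest label."""
        x = resample(track).ravel()
        dists = [(np.linalg.norm(x - resample(tpl).ravel()), label)
                 for label, tpl in dictionary]
        return min(dists)[1]

    # toy templates for "0" (a circle) and "1" (a vertical stroke) in angle space
    theta = np.linspace(0, 2 * np.pi, 80)
    circle = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
    stroke = np.column_stack([np.zeros(80), np.linspace(-10, 10, 80)])
    dictionary = [("0", circle), ("1", stroke)]

    noisy = circle + np.random.default_rng(1).normal(0, 0.5, circle.shape)
    print(classify(noisy, dictionary))   # -> 0
    ```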

  3. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning Holle

    2012-03-01

    Full Text Available Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.

  4. The cortical signature of impaired gesturing: Findings from schizophrenia

    Directory of Open Access Journals (Sweden)

    Petra Verena Viher

    2018-01-01

    Full Text Available Schizophrenia is characterized by deficits in gesturing, which is important for nonverbal communication. Research in healthy participants and brain-damaged patients revealed a left-lateralized fronto-parieto-temporal network underlying gesture performance. First evidence from structural imaging studies in schizophrenia corroborates these results. However, as of yet, it is unclear if cortical thickness abnormalities contribute to impairments in gesture performance. We hypothesized that patients with deficits in gesture production show cortical thinning in 12 regions of interest (ROIs) of a gesture network relevant for gesture performance and recognition. Forty patients with schizophrenia and 41 healthy controls performed hand and finger gestures as either imitation or pantomime. Group differences in cortical thickness between patients with deficits, patients without deficits, and controls were explored using a multivariate analysis of covariance. In addition, the relationship between gesture recognition and cortical thickness was investigated. Patients with deficits in gesture production had reduced cortical thickness in eight ROIs, including the pars opercularis of the inferior frontal gyrus, the superior and inferior parietal lobes, and the superior and middle temporal gyri. Gesture recognition correlated with cortical thickness in fewer, but mainly the same, ROIs within the patient sample. In conclusion, our results show that impaired gesture production and recognition in schizophrenia is associated with cortical thinning in distinct areas of the gesture network.

  5. Hand-eye calibration using a target registration error model.

    Science.gov (United States)

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  6. Changes to online control and eye-hand coordination with healthy ageing.

    Science.gov (United States)

    O'Rielly, Jessica L; Ma-Wyatt, Anna

    2018-06-01

    Goal directed movements are typically accompanied by a saccade to the target location. Online control plays an important part in correction of a reach, especially if the target or goal of the reach moves during the reach. While there are notable changes to visual processing and motor control with healthy ageing, there is limited evidence about how eye-hand coordination during online updating changes with healthy ageing. We sought to quantify differences between older and younger people for eye-hand coordination during online updating. Participants completed a double step reaching task implemented under time pressure. The target perturbation could occur 200, 400 and 600 ms into a reach. We measured eye position and hand position throughout the trials to investigate changes to saccade latency, movement latency, movement time, reach characteristics and eye-hand latency and accuracy. Both groups were able to update their reach in response to a target perturbation that occurred at 200 or 400 ms into the reach. All participants demonstrated incomplete online updating for the 600 ms perturbation time. Saccade latencies, measured from the first target presentation, were generally longer for older participants. Older participants had significantly increased movement times but there was no significant difference between groups for touch accuracy. We speculate that the longer movement times enable the use of new visual information about the target location for online updating towards the end of the movement. Interestingly, older participants also produced a greater proportion of secondary saccades within the target perturbation condition and had generally shorter eye-hand latencies. This is perhaps a compensatory mechanism as there was no significant group effect on final saccade accuracy. Overall, the pattern of results suggests that online control of movements may be qualitatively different in older participants. Crown Copyright © 2018. Published by Elsevier B.V. All

  7. Holographic Raman Tweezers Controlled by Hand Gestures and Voice Commands

    Czech Academy of Sciences Publication Activity Database

    Tomori, Z.; Antalík, M.; Kesa, P.; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Zemánek, Pavel

    2013-01-01

    Vol. 3, 2B (2013), pp. 331-336 ISSN 2160-8881 Institutional support: RVO:68081731 Keywords: Holographic Optical Tweezers * Raman Tweezers * Natural User Interface * Leap Motion * Gesture Camera Subject RIV: BH - Optics, Masers, Lasers

  8. Hippocampal declarative memory supports gesture production: Evidence from amnesia.

    Science.gov (United States)

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2016-12-01

    Spontaneous co-speech hand gestures provide a visuospatial representation of what is being communicated in spoken language. Although it is clear that gestures emerge from representations in memory for what is being communicated (De Ruiter, 1998; Wesp, Hesse, Keutmann, & Wheaton, 2001), the mechanism supporting the relationship between gesture and memory is unknown. Current theories of gesture production posit that action - supported by motor areas of the brain - is key in determining whether gestures are produced. We propose that when and how gestures are produced is determined in part by hippocampally-mediated declarative memory. We examined the speech and gesture of healthy older adults and of memory-impaired patients with hippocampal amnesia during four discourse tasks that required accessing episodes and information from the remote past. Consistent with previous reports of impoverished spoken language in patients with hippocampal amnesia, we predicted that these patients, who have difficulty generating multifaceted declarative memory representations, may in turn have impoverished gesture production. We found that patients gestured less overall relative to healthy comparison participants, and that this was particularly evident in tasks that may rely more heavily on declarative memory. Thus, gestures do not just emerge from the motor representation activated for speaking, but are also sensitive to the representation available in hippocampal declarative memory, suggesting a direct link between memory and gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A common control signal and a ballistic stage can explain the control of coordinated eye-hand movements.

    Science.gov (United States)

    Gopal, Atul; Murthy, Aditya

    2016-06-01

    Voluntary control has been extensively studied in the context of eye and hand movements made in isolation, yet little is known about the nature of control during eye-hand coordination. We probed this with a redirect task. Here subjects had to make reaching/pointing movements accompanied by coordinated eye movements but had to change their plans when the target occasionally changed its position during some trials. Using a race model framework, we found that separate effector-specific mechanisms may be recruited to control eye and hand movements when executed in isolation but when the same effectors are coordinated a unitary mechanism to control coordinated eye-hand movements is employed. Specifically, we found that performance curves were distinct for the eye and hand when these movements were executed in isolation but were comparable when they were executed together. Second, the time to switch motor plans, called the target step reaction time, was different in the eye-alone and hand-alone conditions but was similar in the coordinated condition under assumption of a ballistic stage of ∼40 ms, on average. Interestingly, the existence of this ballistic stage could predict the extent of eye-hand dissociations seen in individual subjects. Finally, when subjects were explicitly instructed to control specifically a single effector (eye or hand), redirecting one effector had a strong effect on the performance of the other effector. Taken together, these results suggest that a common control signal and a ballistic stage are recruited when coordinated eye-hand movement plans require alteration. Copyright © 2016 the American Physiological Society.

  10. The use of hand gestures to communicate about nonpresent objects in mind among children with autism spectrum disorder.

    Science.gov (United States)

    So, Wing-Chee; Lui, Ming; Wong, Tze-Kiu; Sit, Long-Tin

    2015-04-01

    The current study examined whether children with autism spectrum disorder (ASD), in comparison with typically developing children, perceive and produce gestures to identify nonpresent objects (i.e., referent-identifying gestures), which is crucial for communicating ideas in a discourse. An experimenter described the uses of daily-life objects to 6- to 12-year-old children both orally and with gestures. The children were then asked to describe how they performed daily activities using those objects. All children gestured. A gesture identified a nonpresent referent if it was produced in the same location that had previously been established by the experimenter. Children with ASD gestured at the specific locations less often than typically developing children. Verbal and spatial memory were positively correlated with the ability to produce referent-identifying gestures for all children. However, the positive correlation between Raven's Children Progressive Matrices score and the production of referent-identifying gestures was found only in children with ASD. Children with ASD might be less able to perceive and produce referent-identifying gestures and may rely more heavily on visual-spatial skills in producing referent-identifying gestures. The results have clinical implications for designing an intervention program to enhance the ability of children with ASD to communicate about nonpresent objects with gestures.

  11. Gesture's role in speaking, learning, and creating language.

    Science.gov (United States)

    Goldin-Meadow, Susan; Alibali, Martha Wagner

    2013-01-01

    When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

  12. Development of a Wearable Controller for Gesture-Recognition-Based Applications Using Polyvinylidene Fluoride.

    Science.gov (United States)

    Van Volkinburg, Kyle; Washington, Gregory

    2017-08-01

    This paper reports on a wearable gesture-based controller fabricated using the sensing capabilities of the flexible thin-film piezoelectric polymer polyvinylidene fluoride (PVDF) which is shown to repeatedly and accurately discern, in real time, between right and left hand gestures. The PVDF is affixed to a compression sleeve worn on the forearm to create a wearable device that is flexible, adaptable, and highly shape conforming. Forearm muscle movements, which drive hand motions, are detected by the PVDF which outputs its voltage signal to a developed microcontroller-based board and processed by an artificial neural network that was trained to recognize the generated voltage profile of right and left hand gestures. The PVDF has been spatially shaded (etched) in such a way as to increase sensitivity to expected deformations caused by the specific muscles employed in making the targeted right and left gestures. The device proves to be exceptionally accurate both when positioned as intended and when rotated and translated on the forearm.
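
    As a hedged illustration of the recognition stage, the sketch below reduces windowed voltage signals to a few crude features and trains a small neural network to separate right- from left-hand gestures. The features, window length, and synthetic signals are assumptions, not the authors' trained network or their PVDF data.

    ```python
    # Toy right/left gesture classifier over windowed sensor voltage signals.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)

    def features(window):
        """Crude descriptors of one voltage window: peak, RMS, and signed sum."""
        return [np.max(np.abs(window)), np.sqrt(np.mean(window ** 2)), float(np.sum(window))]

    def fake_window(sign):
        """Synthetic stand-in: right-hand gestures bump positive, left-hand negative."""
        t = np.linspace(0, 1, 200)
        return sign * np.exp(-((t - 0.5) ** 2) / 0.01) + rng.normal(0, 0.05, t.size)

    X = np.array([features(fake_window(+1)) for _ in range(50)] +
                 [features(fake_window(-1)) for _ in range(50)])
    y = np.array(["right"] * 50 + ["left"] * 50)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.predict([features(fake_window(+1))]))   # -> ['right']
    ```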

  13. Co-speech gestures influence neural activity in brain regions associated with processing semantic information.

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Hasson, Uri; Skipper, Jeremy I; Small, Steven L

    2009-11-01

    Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.

  14. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    Science.gov (United States)

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task were coded, on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while the balance leans more toward speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry in gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable regarding the higher understanding levels. Gestures and speech are more synchronized in time as children are older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and

  15. Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures.

    Science.gov (United States)

    Cooperrider, Kensy; Gentner, Dedre; Goldin-Meadow, Susan

    2016-01-01

    How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants' gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.

  16. Accelerometer Gesture System Using the Fast Dynamic Time Warping (FastDTW) Method

    Directory of Open Access Journals (Sweden)

    Sam Farisa Chaerul Haviana

    2016-01-01

    Full Text Available In the modern environment, the interaction between humans and computers requires a more natural form of interaction. Therefore, it is important to be able to build a system that can meet this demand, such as a hand gesture recognition system, to create a more natural form of interaction. This study aims to design a smartphone accelerometer gesture system as a human-computer interaction interface using FastDTW (Fast Dynamic Time Warping). The result of this study is a form of gesture interaction implemented in a system that recognizes human hand movements based on a smartphone accelerometer and generates commands to run media player application functions as a case study. FastDTW, a development of the Dynamic Time Warping (DTW) method, computes faster than DTW while achieving accuracy approaching that of DTW. In the test results, FastDTW showed a fairly high degree of accuracy, reaching 86%, and a better computing speed compared to DTW.   Keywords: Human and Computer Interaction, Accelerometer-based gesture, FastDTW, Media player application function
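
    The sketch below illustrates the underlying idea with plain DTW for self-containment: an accelerometer trace is assigned the label of the nearest stored gesture template. FastDTW replaces the quadratic-time alignment with a linear-time approximation (for example via the fastdtw Python package); the templates and media-player command names here are assumed, not taken from the study.

    ```python
    # Nearest-template gesture classification with classic DTW over 3-axis traces.
    import numpy as np

    def dtw_distance(a, b):
        """O(len(a)*len(b)) DTW between two (T, 3) accelerometer traces."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def classify(trace, templates):
        """templates: dict mapping command name -> recorded (T, 3) template trace."""
        return min(templates, key=lambda name: dtw_distance(trace, templates[name]))

    t = np.linspace(0, 1, 60)[:, None]
    templates = {
        "play":  np.hstack([np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)]),
        "pause": np.hstack([np.zeros_like(t), np.cos(2 * np.pi * t), np.zeros_like(t)]),
    }
    query = templates["play"] + np.random.default_rng(3).normal(0, 0.1, (60, 3))
    print(classify(query, templates))   # -> play
    ```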

  17. The Role of Gestures in a Teacher-Student-Discourse about Atoms

    Science.gov (United States)

    Abels, Simone

    2016-01-01

    Recent educational research emphasises the importance of analysing talk and gestures to come to an understanding about students' conceptual learning. Gestures are perceived as complex hand movements being equivalent to other language modes. They can convey experienceable as well as abstract concepts. As well as technical language, gestures…

  18. Integration of speech and gesture in aphasia.

    Science.gov (United States)

    Cocks, Naomi; Byrne, Suzanne; Pritchard, Madeleine; Morgan, Gary; Dipper, Lucy

    2018-02-07

    Information from speech and gesture is often integrated to comprehend a message. This integration process requires the appropriate allocation of cognitive resources to both the gesture and speech modalities. People with aphasia are likely to find integration of gesture and speech difficult. This is due to a reduction in cognitive resources, a difficulty with resource allocation or a combination of the two. Despite it being likely that people who have aphasia will have difficulty with integration, empirical evidence describing this difficulty is limited. Such a difficulty was found in a single case study by Cocks et al. in 2009, and is replicated here with a greater number of participants. To determine whether individuals with aphasia have difficulties understanding messages in which they have to integrate speech and gesture. Thirty-one participants with aphasia (PWA) and 30 control participants watched videos of an actor communicating a message in three different conditions: verbal only, gesture only, and verbal and gesture message combined. The message related to an action in which the name of the action (e.g., 'eat') was provided verbally and the manner of the action (e.g., hands in a position as though eating a burger) was provided gesturally. Participants then selected a picture that 'best matched' the message conveyed from a choice of four pictures which represented a gesture match only (G match), a verbal match only (V match), an integrated verbal-gesture match (Target) and an unrelated foil (UR). To determine the gain that participants obtained from integrating gesture and speech, a measure of multimodal gain (MMG) was calculated. The PWA were less able to integrate gesture and speech than the control participants and had significantly lower MMG scores. When the PWA had difficulty integrating, they more frequently selected the verbal match. The findings suggest that people with aphasia can have difficulty integrating speech and gesture in order to obtain

  19. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations.

    Science.gov (United States)

    Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M

    2015-03-01

    Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Gesture as a Resource for Intersubjectivity in Second-Language Learning Situations

    Science.gov (United States)

    Belhiah, Hassan

    2013-01-01

    This study documents the role of hand gestures in achieving mutual understanding in second-language learning situations. The study tracks the way gesture is coordinated with talk in tutorials between two Korean students and their American teachers. The study adopts an interactional approach to the study of participants' talk and gestural…

  1. Beat gestures help preschoolers recall and comprehend discourse information.

    Science.gov (United States)

    Llanes-Coromina, Judith; Vilà-Giménez, Ingrid; Kushch, Olga; Borràs-Comes, Joan; Prieto, Pilar

    2018-08-01

    Although the positive effects of iconic gestures on word recall and comprehension by children have been clearly established, less is known about the benefits of beat gestures (rhythmic hand/arm movements produced together with prominent prosody). This study investigated (a) whether beat gestures combined with prosodic information help children recall contrastively focused words as well as information related to those words in a child-directed discourse (Experiment 1) and (b) whether the presence of beat gestures helps children comprehend a narrative discourse (Experiment 2). In Experiment 1, 51 4-year-olds were exposed to a total of three short stories with contrastive words presented in three conditions, namely with prominence in both speech and gesture, prominence in speech only, and nonprominent speech. Results of a recall task showed that (a) children remembered more words when exposed to prominence in both speech and gesture than in either of the other two conditions and that (b) children were more likely to remember information related to those words when the words were associated with beat gestures. In Experiment 2, 55 5- and 6-year-olds were presented with six narratives with target items either produced with prosodic prominence but no beat gestures or produced with both prosodic prominence and beat gestures. Results of a comprehension task demonstrated that stories told with beat gestures were comprehended better by children. Together, these results constitute evidence that beat gestures help preschoolers not only to recall discourse information but also to comprehend it. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Chaotic Music Generation System Using Music Conductor Gesture

    OpenAIRE

    Chen, Shuai; Maeda, Yoichiro; Takahashi, Yasutake

    2013-01-01

    In research on interactive music generation, we propose a music generation method in which the computer generates music based on recognition of a human music conductor's gestures. In this research, the generated music is tuned in real time by the recognized gestures, which set the parameters of a network of chaotic elements. The music conductor's hand motions are detected by Microsoft Kinect in this system. Music theories are embedded in the algorithm; as a result, the generated music will be ...

  3. Construct validity for eye-hand coordination skill on a virtual reality laparoscopic surgical simulator.

    Science.gov (United States)

    Yamaguchi, Shohei; Konishi, Kozo; Yasunaga, Takefumi; Yoshida, Daisuke; Kinjo, Nao; Kobayashi, Kiichiro; Ieiri, Satoshi; Okazaki, Ken; Nakashima, Hideaki; Tanoue, Kazuo; Maehara, Yoshihiko; Hashizume, Makoto

    2007-12-01

    This study was carried out to investigate whether eye-hand coordination skill on a virtual reality laparoscopic surgical simulator (the LAP Mentor) was able to differentiate among subjects with different laparoscopic experience and thus confirm its construct validity. A total of 31 surgeons, who were all right-handed, were divided into the following two groups according to their experience as an operator in laparoscopic surgery: experienced surgeons (more than 50 laparoscopic procedures) and novice surgeons (fewer than 10 laparoscopic procedures). The subjects were tested using the eye-hand coordination task of the LAP Mentor, and performance was compared between the two groups. Assessment of the laparoscopic skills was based on parameters measured by the simulator. The experienced surgeons completed the task significantly faster than the novice surgeons. The experienced surgeons also achieved a lower number of movements (NOM), better economy of movement (EOM) and faster average speed of the left instrument than the novice surgeons, whereas there were no significant differences between the two groups for the NOM, EOM and average speed of the right instrument. Eye-hand coordination skill of the nondominant hand, but not the dominant hand, measured using the LAP Mentor was able to differentiate between subjects with different laparoscopic experience. This study also provides evidence of construct validity for eye-hand coordination skill on the LAP Mentor.

  4. Asymmetric dynamic attunement of speech and gestures in the construction of children’s understanding

    Directory of Open Access Journals (Sweden)

    Lisette De Jonge-Hoekstra

    2016-03-01

    Full Text Available As children learn they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task were coded, on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more towards gestures leading speech in time, while the balance leans more towards speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry in gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable regarding the higher understanding levels. Gestures and speech are more synchronized in time as children are older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between

  5. Looking at eye dominance from a different angle: is sighting strength related to hand preference?

    Science.gov (United States)

    Carey, David P; Hutchinson, Claire V

    2013-10-01

    Sighting dominance (the behavioural preference for one eye over the other under monocular viewing conditions) has traditionally been thought of as a robust individual trait. However, Khan and Crawford (2001) have shown that, under certain viewing conditions, eye preference reverses as a function of horizontal gaze angle. Remarkably, the reversal of sighting from one eye to the other depends on which hand is used to reach out and grasp the target. Their procedure provides an ideal way to measure the strength of monocular preference for sighting, which may be related to other indicators of hemispheric specialisation for speech, language and motor function. Therefore, we hypothesised that individuals with consistent side preferences (e.g., right hand, right eye) should have more robust sighting dominance than those with crossed lateral preferences. To test this idea, we compared strength of eye dominance in individuals who are consistently right or left sided for hand and foot preference with those who are not. We also modified their procedure in order to minimise a potential image size confound, suggested by Banks et al. (2004) as an explanation of Khan and Crawford's results. We found that the sighting dominance switch occurred at similar eccentricities when we controlled for effects of hand occlusion and target size differences. We also found that sighting dominance thresholds change predictably with the hand used. However, we found no evidence for relationships between strength of hand preference as assessed by questionnaire or by pegboard performance and strength of sighting dominance. Similarly, participants with consistent hand and foot preferences did not show stronger eye preference as assessed using the Khan and Crawford procedure. These data are discussed in terms of indirect relationships between sighting dominance, hand preference and cerebral specialisation for language and motor control. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. From action to abstraction: Gesture as a mechanism of change.

    Science.gov (United States)

    Goldin-Meadow, Susan

    2015-12-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and at making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked-the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas-it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning on problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ-gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

  7. A Modified Tactile Brush Algorithm for Complex Touch Gestures

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric [Texas A&M University]

    2015-01-01

    Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
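
    The scheduling problem behind apparent tactile motion can be sketched as follows: each actuator in a stroke starts one SOA after its neighbour and stays on for one DOS, so that overlapping activations read as a moving contact. The DOS and SOA values below are placeholders, not the calibrated parameters from the study, and the routine does not reproduce the modified Tactile Brush algorithm for multi-contact gestures.

    ```python
    # Onset/offset scheduling for a single-row stroke of tactile actuators.
    def stroke_schedule(n_actuators, dos_ms=120.0, soa_ms=60.0):
        """Return a list of (actuator_index, onset_ms, offset_ms)."""
        schedule = []
        for i in range(n_actuators):
            onset = i * soa_ms                 # each actuator starts one SOA after the last
            schedule.append((i, onset, onset + dos_ms))   # and stays on for one DOS
        return schedule

    for idx, on, off in stroke_schedule(4):
        print(f"actuator {idx}: on at {on:.0f} ms, off at {off:.0f} ms")
    ```

    A whole-hand contact moving along the arm could then, in this simplified picture, be scheduled as several such rows started together, one per line of actuators under the contact area.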

  8. Getting to the elephants: Gesture and preschoolers' comprehension of route direction information.

    Science.gov (United States)

    Austin, Elizabeth E; Sweller, Naomi

    2017-11-01

    During early childhood, children find spatial tasks such as following novel route directions challenging. Spatial tasks place demands on multiple cognitive processes, including language comprehension and memory, at a time in development when resources are limited. As such, gestures accompanying route directions may aid comprehension and facilitate task performance by scaffolding cognitive processes, including language and memory processing. This study examined the effect of presenting gesture during encoding on spatial task performance during early childhood. Three- to five-year-olds were presented with verbal route directions through a zoo-themed spatial array and, depending on assigned condition (no gesture, beat gesture, or iconic/deictic gesture), accompanying gestures. Children presented with verbal route directions accompanied by a combination of iconic (pantomime) and deictic (pointing) gestures verbally recalled more than children presented with beat gestures (rhythmic hand movements) or no gestures accompanying the route directions. The presence of gesture accompanying route directions similarly influenced physical route navigation, such that children presented with gesture (beat, pantomime, and pointing) navigated the route more accurately than children presented with no gestures. Across all gesture conditions, location information (e.g., the penguin pond) was recalled more than movement information (e.g., go around) and descriptive information (e.g., bright red). These findings suggest that speakers' gestures accompanying spatial task information influence listeners' recall and task performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Bimanual Gesture Imitation in Alzheimer's Disease.

    Science.gov (United States)

    Sanin, Günter; Benke, Thomas

    2017-01-01

    Unimanual gesture production or imitation has often been studied in Alzheimer's disease (AD) during apraxia testing. In the present study, it was hypothesized that bimanual motor tasks may be a sensitive method to detect impairments of motor cognition in AD due to increased demands on the cognitive system. We investigated bimanual, meaningless gesture imitation in 45 AD outpatients, 38 subjects with mild cognitive impairment (MCI), and 50 normal controls (NC) attending a memory clinic. Participants performed neuropsychological background testing and three tasks: the Interlocking Finger Test (ILF), Imitation of Alternating Hand Movements (AHM), and Bimanual Rhythm Tapping (BRT). The tasks were short and easy to administer. Inter-rater reliability was high across all three tests. AD patients performed significantly poorer than NC and MCI participants; a deficit to imitate bimanual gestures was rarely found in MCI and NC participants. Sensitivity to detect AD ranged from 0.5 to 0.7, with specificity beyond 0.9. ROC analyses revealed good diagnostic accuracy (0.77 to 0.92). Impairment to imitate bimanual gestures was mainly predicted by diagnosis and disease severity. Our findings suggest that an impairment to imitate bimanual, meaningless gestures is a valid disease marker of mild to moderate AD and can easily be assessed in memory clinic settings. Based on our preliminary findings, it appears to be a separate impairment which can be distinguished from other cognitive deficits.

  10. Prototyping with your hands: the many roles of gesture in the communication of design concepts

    DEFF Research Database (Denmark)

    Cash, Philip; Maier, Anja

    2016-01-01

    There is an on-going focus exploring the use of gesture in design situations; however, there are still significant questions as to how this is related to the understanding and communication of design concepts. This work explores the use of gesture through observing and video-coding four teams of ...

  11. User-independent accelerometer-based gesture recognition for mobile devices

    Directory of Open Access Journals (Sweden)

    Eduardo METOLA

    2013-07-01

    Full Text Available Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be light and to operate in a user-independent manner in real time. The recognition system was implemented in a smart phone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.

  12. Human Classification Based on Gestural Motions by Using Components of PCA

    International Nuclear Information System (INIS)

    Aziz, Azri A; Wan, Khairunizam; Za'aba, S K; Shahriman A B; Asyekin H; Zuradzman M R; Adnan, Nazrul H

    2013-01-01

    Lately, the study of human capabilities with the aim of integrating them into machines has become a popular topic of discussion. Humans are blessed with special abilities: they can hear, see, sense, speak, think and understand each other. Giving such abilities to machines to improve human life is researchers' aim for a better quality of life in the future. This research concentrated on human gestures, specifically arm motions, for distinguishing individuals, which led to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects were selected with different body sizes, and the acquired data then underwent a resampling process. The results discuss the classification of humans based on arm trajectories using Principal Component Analysis (PCA).
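
    A hedged sketch of the pipeline described above: resample each arm trajectory to a fixed length, project the flattened trajectories with PCA, and inspect the leading components. The synthetic trajectories and the two "subjects" below are stand-ins for the hand gesture database mentioned in the abstract.

    ```python
    # Resample arm trajectories, then project them onto principal components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)

    def resample(traj, n=100):
        t_old = np.linspace(0, 1, len(traj))
        t_new = np.linspace(0, 1, n)
        return np.column_stack([np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])])

    def fake_trajectory(scale):
        """Synthetic 2-D arm trajectory; scale loosely mimics different body sizes."""
        t = np.linspace(0, 1, rng.integers(80, 140))
        xy = np.column_stack([scale * np.sin(2 * np.pi * t), scale * t])
        return xy + rng.normal(0, 0.02, xy.shape)

    X = np.array([resample(fake_trajectory(s)).ravel()
                  for s in [1.0] * 20 + [1.6] * 20])      # 20 trials per "subject"
    components = PCA(n_components=2).fit_transform(X)

    # the leading component should separate the two body-size groups
    print(components[:20, 0].mean(), components[20:, 0].mean())
    ```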

  13. A tale of two hands: Children's early gesture use in narrative production predicts later narrative structure in speech

    Science.gov (United States)

    Demir, Özlem Ece; Levine, Susan C.; Goldin-Meadow, Susan

    2014-01-01

    Speakers of all ages spontaneously gesture as they talk. These gestures predict children's milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age 5 and then again at 6, 7, and 8. Children's narrative structure in speech improved across these ages. At age 5, many of the children expressed a character's viewpoint in gesture, and these children were more likely to tell better-structured stories at the later ages than children who did not produce character-viewpoint gestures at age 5. In contrast, framing narratives from a character's perspective in speech at age 5 did not predict later narrative structure in speech. Gesture thus continues to act as a harbinger of change even as it assumes new roles in relation to discourse. PMID:25088361

  14. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  15. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    Science.gov (United States)

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  16. An interactive VR system based on full-body tracking and gesture recognition

    Science.gov (United States)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions using sensors such as Leap Motion to recognize the gestures of users in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the user's hands compared to the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize the gestures of users, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.

  17. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  18. Learning Semantics of Gestural Instructions for Human-Robot Collaboration.

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions.

  19. Study of hand signs in Judeo-Christian art.

    Science.gov (United States)

    Ram, Ashwin N; Chung, Kevin C

    2008-09-01

    Hand gestures play a crucial role in religious art. An examination of Judeo-Christian art finds an ecclesiastical language that is concealed in metaphors and expressed by unique hand gestures. Many of these hand signs convey messages that are not familiar to most people admiring these paintings. Investigating the history and classifying some of the predominant hand signs found in Judeo-Christian art might serve to stimulate discussion concerning the many nuances of symbolic art. This presentation examines the meaning behind 8 common hand signs in Judeo-Christian art.

  20. A gesture-controlled projection display for CT-guided interventions.

    Science.gov (United States)

    Mewes, A; Saalfeld, P; Riabikin, O; Skalej, M; Hansen, C

    2016-01-01

    The interaction with interventional imaging systems within a sterile environment is a challenging task for physicians. Direct physician-machine interaction during an intervention is rather limited because of sterility and workspace restrictions. We present a gesture-controlled projection display that enables a direct and natural physician-machine interaction during computed tomography (CT)-based interventions. To this end, a graphical user interface is projected on a radiation shield located in front of the physician. Hand gestures in front of this display are captured and classified using a Leap Motion controller. We propose a gesture set to control basic functions of intervention software such as gestures for 2D image exploration, 3D object manipulation and selection. Our methods were evaluated in a clinically oriented user study with 12 participants. The results of the performed user study confirm that the display and the underlying interaction concept are accepted by clinical users. The recognition of the gestures is robust, although there is potential for improvements. The gesture training times are less than 10 min, but vary heavily between the participants of the study. The developed gestures are connected logically to the intervention software and intuitive to use. The proposed gesture-controlled projection display counters current thinking, namely it gives the radiologist complete control of the intervention software. It opens new possibilities for direct physician-machine interaction during CT-based interventions and is well suited to become an integral part of future interventional suites.

  1. P1-20: The Relation of Eye and Hand Movement during Multimodal Recall Memory

    Directory of Open Access Journals (Sweden)

    Eun-Sol Kim

    2012-10-01

    Full Text Available Eye and hand movement tracking has proven to be a successful tool and is widely used to characterize human cognition in language and visual processing (Just & Carpenter, 1976 Cognitive Psychology 8, 441–480). Eye movement in particular has proven to be a successful measure of human language and visual processing (Rayner, 1998 Psychological Bulletin 124(3), 372–422). Recently, mouse tracking has been used for social-cognition tasks such as categorization of sex-atypical faces and for studying spoken-language processes (Magnuson, 2005 PNAS 102(28), 9995–9996; Spivey et al., 2005 PNAS 102, 10393–10398). Here, we present a framework that uses both eye gaze and hand movement simultaneously to analyze the relation between them during memory retrieval. We tracked eye and mouse movements while the subject was watching a drama and playing a multimodal memory game (MMG), a cognitive task designed to investigate the recall memory mechanisms involved in watching video dramas (Zhang, 2009 AAAI 2009 Spring Symposium: Agents that Learn from Human Teachers 144–149). Experimental results show that eye tracking and mouse tracking provide complementary information about underlying cognitive processes. We also found interesting patterns in eye-hand movement during multimodal memory recall.

  2. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    Science.gov (United States)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty for current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve the detection accuracy. The proposed system can be applied in interactive equipment, such as household interactive televisions, without special training for users.
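
    The abstract mentions an improved 8-direction feature vector feeding a discrete HMM. The sketch below shows only the basic direction-code quantization step under assumed conventions (eight equal angular sectors); the improved variant and the trained HMMs from the paper are not reproduced.

```python
# Minimal sketch: quantize frame-to-frame hand motion into 8-direction symbols,
# which can then serve as the observation sequence of a discrete HMM.
import numpy as np

def direction_codes(points):
    """points: (N, 2) array of hand centroid positions over time.
    Returns a list of symbols 0..7, one per frame-to-frame displacement."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue  # skip frames without motion
        angle = np.arctan2(dy, dx)                 # in the range -pi .. pi
        codes.append(int((angle + np.pi) / (np.pi / 4)) % 8)
    return codes
```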

  3. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions generated from such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system, in which two types of gesture recognition modules are prepared and with which the most appropriate recognition module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach)(for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.

  4. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. How early do children understand gesture-speech combinations with iconic gestures?

    Science.gov (United States)

    Stanfield, Carmen; Williamson, Rebecca; Ozçalişkan, Seyda

    2014-03-01

    Children understand gesture+speech combinations in which a deictic gesture adds new information to the accompanying speech by age 1;6 (Morford & Goldin-Meadow, 1992; 'push'+point at ball). This study explores how early children understand gesture+speech combinations in which an iconic gesture conveys additional information not found in the accompanying speech (e.g., 'read'+BOOK gesture). Our analysis of two- to four-year-old children's responses in a gesture+speech comprehension task showed that children grasp the meaning of iconic co-speech gestures by age three and continue to improve their understanding with age. Overall, our study highlights the important role gesture plays in language comprehension as children learn to unpack increasingly complex communications addressed to them at the early ages.

  6. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Full Text Available Since a gesture involves a dynamic and complex motion, multiview observation and recognition are desirable. For a better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark together with our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself can vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.

  7. Nearest neighbour classification of Indian sign language gestures ...

    Indian Academy of Sciences (India)

    In the ideal case, a gesture recognition ... Every geographical region has developed its own sys- ... et al [10] present a study on vision-based static hand shape .... tures, and neural networks for recognition. ..... We used the city-block dis-.

  8. The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.

    Science.gov (United States)

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2017-01-01

    Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities will impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels will modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted compared to the restricted condition, and compared to the low empathy participants. Remarkably, however, the latter showed exactly the opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.

  9. User Interface Aspects of a Human-Hand Simulation System

    Directory of Open Access Journals (Sweden)

    Beifang Yi

    2005-10-01

    Full Text Available This paper describes the user interface design for a human-hand simulation system, a virtual environment that produces ground truth data (life-like human hand gestures and animations) and provides visualization support for experiments on computer vision-based hand pose estimation and tracking. The system allows users to save time in data generation and easily create any hand gestures. We have designed and implemented this user interface with the consideration of usability goals and software engineering issues.

  10. Drawing from Memory: Hand-Eye Coordination at Multiple Scales

    Science.gov (United States)

    Spivey, Michael J.

    2013-01-01

    Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well. PMID:23554894

  11. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As with other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to static arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable in practice. For 2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Due to the LCA method's balanced learning between global and local features, a large number of samples can be used efficiently in learning.

  12. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    Science.gov (United States)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and they react to hand and finger movements recognized through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual core processor and 1 GB RAM.

  13. A New Profile Shape Matching Stereovision Algorithm for Real-time Human Pose and Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2014-02-01

    Full Text Available This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.
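
    The published algorithm is not reproduced here, but the following toy sketch conveys the general idea of row-wise profile matching: a 1-D window of the left row profile is compared against shifted windows of the right row with zero-mean normalized correlation, which is insensitive to additive and multiplicative intensity changes. Window size and disparity range are illustrative assumptions.

```python
# Toy row-wise profile matching for a rectified stereo pair (illustrative only).
import numpy as np

def row_disparity(left_row, right_row, max_disp=64, win=9):
    """left_row, right_row: 1-D intensity profiles of the same image row."""
    half = win // 2
    disp = np.zeros(len(left_row), dtype=np.int32)
    for x in range(half, len(left_row) - half):
        ref = left_row[x - half:x + half + 1].astype(float)
        ref -= ref.mean()
        best_d, best_score = 0, -np.inf
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1].astype(float)
            cand -= cand.mean()
            denom = np.linalg.norm(ref) * np.linalg.norm(cand)
            score = float(ref @ cand) / denom if denom > 0 else -np.inf
            if score > best_score:
                best_d, best_score = d, score
        disp[x] = best_d  # disparity of the best-matching profile window
    return disp
```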

  14. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user’s training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  15. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
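
    To make the code-matching idea concrete, here is a small hedged sketch: each vocabulary word is stored as a tuple of five component codes, and an unknown gesture is assigned to the word whose entry agrees on the most components. The component names and code values are invented for illustration and do not come from the paper.

```python
# Hypothetical code table: word -> (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "HELLO":  ("open_palm", "vertical",   "palm_out", "none", "wave"),
    "THANKS": ("flat_hand", "horizontal", "palm_up",  "none", "forward"),
    "GOOD":   ("thumb_up",  "vertical",   "palm_in",  "none", "still"),
}

def match_sign(observed_components):
    """observed_components: 5-tuple predicted by the five component classifiers."""
    def score(entry):
        # Count how many of the five components agree with the observation.
        return sum(o == e for o, e in zip(observed_components, entry))
    return max(CODE_TABLE, key=lambda word: score(CODE_TABLE[word]))
```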

  16. Mainstreaming gesture based interfaces

    Directory of Open Access Journals (Sweden)

    David Procházka

    2013-01-01

    Full Text Available Gestures are a common way of interaction with mobile devices. They emerged especially with iPhone production. Gestures in currently used devices are usually based on the original gestures presented by Apple in its iOS (iPhone Operating System). Therefore, there is wide agreement on mobile gesture design. In recent years, experiments with gesture usage have also appeared in other areas of consumer electronics and computers. Examples include televisions, large projections, etc. These gestures can be described as spatial or 3D gestures. They are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find a comparable design agreement for spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for users and slows down spatial gesture adoption. This paper is focused on the standardization of spatial gestures. A review of projects focused on spatial gesture usage is provided in the first part. The main emphasis is placed on the usability point of view. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. Mobile gestures emerged easily because the iPhone gestures were natural and therefore did not need to be learned. The design and implementation of our presentation software, which is controlled by gestures, is outlined in the second part of the paper. Furthermore, usability testing results are provided as well. We tested our application on a group of users not instructed in the implemented gesture design. These results were compared with those obtained with our original implementation. The evaluation can be used as a basis for the implementation of similar projects.

  17. Eyes, Grip and Gesture as Objective Indicators of Intentions and Attention

    DEFF Research Database (Denmark)

    Mortensen, Ditte Hvas

    This poster abstract presents the first part of a study concerning the use of information about gaze, grip and gesture to create non-command interaction. The experiment reported here seeks to establish the occurrence of patterns in nonverbal communication,  which may be used in an activity aware...

  18. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  19. Modeling and evaluation of hand-eye coordination of surgical robotic system on task performance.

    Science.gov (United States)

    Gao, Yuanqian; Wang, Shuxin; Li, Jianmin; Li, Aimin; Liu, Hongbin; Xing, Yuan

    2017-12-01

    Robotic-assisted minimally invasive surgery changes the direct hand and eye coordination in traditional surgery to indirect instrument and camera coordination, which affects the ergonomics, operation performance, and safety. A camera, two instruments, and a target, as the descriptors, are used to construct the workspace correspondence and geometrical relationships in a surgical operation. A parametric model with a set of parameters is proposed to describe the hand-eye coordination of the surgical robot. From the results, optimal values and acceptable ranges of these parameters are identified from two tasks. A 90° viewing angle had the longest completion time; 60° instrument elevation angle and 0° deflection angle had better performance; there is no significant difference among manipulation angles and observing distances on task performance. This hand-eye coordination model provides evidence for robotic design, surgeon training, and robotic initialization to achieve dexterous and safe manipulation in surgery. Copyright © 2017 John Wiley & Sons, Ltd.

  20. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    Science.gov (United States)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures especially developed for the validation of the proposed system.
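
    As a rough, simplified stand-in for the descriptor described above (the paper's spatio-temporal LBP variant is not reproduced), the sketch below computes a plain LBP histogram per depth frame, concatenates the histograms of a short clip, and trains an SVM on the result; parameters are illustrative.

```python
# Simplified depth-gesture classification: per-frame LBP histograms + SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def clip_descriptor(depth_frames, n_points=8, radius=1):
    histograms = []
    for frame in depth_frames:                       # each frame: 2-D depth map
        lbp = local_binary_pattern(frame, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)
        histograms.append(hist)
    return np.concatenate(histograms)

def train(clips, labels):
    """clips: list of equal-length lists of depth frames; labels: gesture ids."""
    X = np.array([clip_descriptor(c) for c in clips])
    return SVC(kernel="rbf").fit(X, labels)
```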

  1. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    Science.gov (United States)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  2. How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis

    OpenAIRE

    Kita, Sotaro; Alibali, M. W.; Chu, Mingyuan

    2017-01-01

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-o...

  3. Gestures, vocalizations, and memory in language origins.

    Science.gov (United States)

    Aboitiz, Francisco

    2012-01-01

    This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  4. Gesture Interaction Browser-Based 3D Molecular Viewer.

    Science.gov (United States)

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instruction of users in less IT oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education.

  5. The Analysis of the Possibility of Using Viola-Jones Algorithm to Recognise Hand Gestures in Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Golański Piotr

    2017-08-01

    Full Text Available The article concerns the application of computer-aided maintenance systems for technical objects in difficult conditions. Difficult conditions are understood as those in which maintenance takes place in a location that makes using a computer hard or even impossible. In such cases, computers integrated with workwear, the so-called wearable computers, should be used, with which communication is possible using hand gestures. The article describes the results of an analysis of the usefulness of one image recognition method based on the Viola-Jones algorithm. This algorithm makes it possible to obtain a model of the recognised image, which can be used as a pattern by an application program detecting a given image.
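
    For illustration, OpenCV's CascadeClassifier implements the Viola-Jones detector discussed in the article; the cascade file named below is a placeholder, since a cascade would have to be trained for the specific hand gesture to be recognised.

```python
# Applying a Viola-Jones cascade with OpenCV (model file is hypothetical).
import cv2

detector = cv2.CascadeClassifier("hand_gesture_cascade.xml")  # placeholder model

def detect_gesture(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    # Returns a list of (x, y, w, h) boxes where the trained pattern was found.
    return detector.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(40, 40))
```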

  6. Dexterous hand gestures recognition based on low-density sEMG signals for upper-limb forearm amputees

    Directory of Open Access Journals (Sweden)

    John Jairo Villarejo Mayor

    2017-08-01

    Full Text Available Introduction: Intuitive prosthesis control is one of the most important challenges in order to reduce the user effort in learning how to use an artificial hand. This work presents the development of a novel method for pattern recognition of sEMG signals able to discriminate, in a very accurate way, dexterous hand and finger movements using a reduced number of electrodes, which implies more confidence and usability for amputees. Methods: The system was evaluated for ten forearm amputees and the results were compared with the performance of able-bodied subjects. Multiple sEMG features based on fractal analysis (detrended fluctuation analysis and Higuchi’s fractal dimension) combined with traditional magnitude-based features were analyzed. Genetic algorithms and sequential forward selection were used to select the best set of features. Support vector machine (SVM), K-nearest neighbors (KNN) and linear discriminant analysis (LDA) were analyzed to classify individual finger flexion, hand gestures and different grasps using four electrodes, performing contractions in a natural way to accomplish these tasks. Statistical significance was computed for all the methods using different sets of features, for both groups of subjects (able-bodied and amputees). Results: The results showed average accuracy up to 99.2% for able-bodied subjects and 98.94% for amputees using SVM, followed very closely by KNN. However, KNN also produces good performance and has a lower computational complexity, which implies an advantage for real-time applications. Conclusion: The results show that the proposed method is promising for accurately controlling dexterous prosthetic hands, providing more functionality and better acceptance for amputees.
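
    One of the fractal features named above, Higuchi's fractal dimension, can be computed directly from a single sEMG channel; the sketch below follows the standard textbook formulation (k_max is an assumed parameter) and is not taken from the paper's code.

```python
# Higuchi's fractal dimension of a 1-D signal (standard formulation).
import numpy as np

def higuchi_fd(x, k_max=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)           # sub-sampled series starting at m
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    k_vals = np.arange(1, k_max + 1)
    # The fractal dimension is the slope of log L(k) versus log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope
```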

  7. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  8. Grids and Gestures: A Comics Making Exercise

    Science.gov (United States)

    Sousanis, Nick

    2015-01-01

    Grids and Gestures is an exercise intended to offer participants insight into a comics maker's decision-making process for composing the entire page through the hands-on activity of making an abstract comic. It requires no prior drawing experience and serves to help reexamine what it means to draw. In addition to a description of how to proceed…

  9. Development of Pointing Gestures in Children With Typical and Delayed Language Acquisition.

    Science.gov (United States)

    Lüke, Carina; Ritterfeld, Ute; Grimminger, Angela; Liszkowski, Ulf; Rohlfing, Katharina J

    2017-11-09

    This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the influence of caregivers' gestural and verbal input on children's communicative development. Thirty children with TD and 10 children with LD were observed together with their primary caregivers in a seminatural setting in 5 sessions between the ages of 12 and 21 months. Language skills were assessed at 24 months. Compared with children with TD, children with LD used fewer index-finger points at 12 and 14 months but more pointing gestures in total at 21 months. There were no significant differences in verbal or gestural input between caregivers of children with or without LD. Using more index-finger points at the beginning of the second year of life is associated with TD, whereas using more pointing gestures at the end of the second year of life is associated with delayed acquisition. Neither the verbal nor gestural input of caregivers accounted for differences in children's skills.

  10. Assessment of eye, hand and male gonadal skin dose in radiotherapy

    International Nuclear Information System (INIS)

    Pushap, M.P.S.

    1979-01-01

    An attempt has been made to gauge the dose to (1) the eye, (2) the skin of the hands and (3) the gonads from radiotherapy of other parts of the body. The study was done on actual male patients at the Jorjani Medical Centre, Tehran. The study indicated a high dose to the eyelid, i.e., about 3% of the tumour dose in the case of head irradiation. The eyes and gonads lie at unequal distances from the thorax, and their doses differ accordingly. It is further emphasised that a minimum dose of 400 rad delivered over three weeks to one month has been reported to be cataractogenic in man. A 50% incidence of progressive loss of vision has been observed with a dose of 750 rad to 1000 rad delivered over three weeks to three months. If appropriate techniques are not employed to shield the eye, even from stray radiation, such limits may easily be reached. (K.B.)

  11. Hand-Eye LRF-Based Iterative Plane Detection Method for Autonomous Robotic Welding

    Directory of Open Access Journals (Sweden)

    Sungmin Lee

    2015-12-01

    Full Text Available This paper proposes a hand-eye LRF-based (laser range finder) welding plane-detection method for autonomous robotic welding in the field of shipbuilding. The hand-eye LRF system consists of a 6 DOF manipulator and an LRF attached to the wrist of the manipulator. The welding plane is detected by the LRF using only the wrist's rotation, to minimize the mechanical error caused by the manipulator's motion. A position on the plane is determined as the average position of the detected points on the plane, and a normal vector to the plane is determined by applying PCA (principal component analysis) to the detected points. The accuracy of the detected plane is analysed by simulations with respect to the wrist's angle interval and the plane angle. As a result of the analysis, an iterative plane-detection method with an alignment motion of the manipulator is proposed to improve the performance of plane detection. To verify the feasibility and effectiveness of the proposed plane-detection method, experiments are carried out with a prototype of the hand-eye LRF-based system, which consists of a 1 DOF wrist joint, an LRF system and a rotatable plane. In addition, the experimental results of the PCA-based plane-detection method are compared with those of two representative plane-detection methods, based on RANSAC (RANdom SAmple Consensus) and the 3D Hough transform, in terms of both accuracy and computation time.
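
    The PCA step described above is compact enough to show directly: the plane position is the mean of the measured points and the normal is the eigenvector of their covariance with the smallest eigenvalue. This is a generic sketch of that computation, not the authors' code.

```python
# Plane fit by PCA: centroid plus direction of least variance as the normal.
import numpy as np

def fit_plane(points):
    """points: (N, 3) array of 3-D points measured on the plane.
    Returns (centroid, unit normal vector)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    return centroid, normal / np.linalg.norm(normal)
```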

  12. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, and describe the person tracking and gesture tracking systems.

  13. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    Science.gov (United States)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  14. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of PDOE images. To recognize the gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. The other gesture recognition approach, based on glove sensors, uses a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognizing the 3D loci of the Polhemus sensor, a discrete HMM is also adopted. A further approach combines the two foregoing recognition systems, using the vision and glove sensors together. The extracted mesh feature and the 8-direction code from locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is also introduced, and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
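
    The discrete-HMM scoring mentioned above can be illustrated with a small scaled forward-algorithm pass: each gesture word keeps its own model, and the word whose model assigns the highest likelihood to the observed 8-direction symbol sequence is chosen. The model parameters here are assumed to have been trained elsewhere; this is not the paper's implementation.

```python
# Scaled forward algorithm for discrete HMMs plus a maximum-likelihood decision.
import numpy as np

def forward_loglik(symbols, pi, A, B):
    """pi: (S,) initial probs, A: (S, S) transitions, B: (S, V) emission probs."""
    alpha = pi * B[:, symbols[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for s in symbols[1:]:
        alpha = (alpha @ A) * B[:, s]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

def recognise(symbols, models):
    """models: dict mapping gesture word -> (pi, A, B)."""
    return max(models, key=lambda w: forward_loglik(symbols, *models[w]))
```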

  15. Two-Stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available The Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for a wide range of applications, including gesture representation. Most research in this field, however, uses HMMs only for recognizing simple gestures, although HMMs can also be applied to recognizing the meaning of whole gestures. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach for HRI in which not only can the human naturally control the robot by hand gestures, but the robot can also recognize what kind of task it is executing. The main idea behind this method is the two-stage Hidden Markov Model. The first-stage HMM recognizes the prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second-stage HMM performs task recognition. Another contribution of this paper is the use of Gaussian mixture output distributions in the HMM to improve the recognition rate. In the experiments, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare with other methods to evaluate the performance.
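
    The two-stage structure lends itself to a short sketch: stage one labels each segmented hand gesture with its most likely prime command, and stage two scores the resulting label sequence with one discrete HMM per task. The scoring functions and model parameters are assumed to exist elsewhere (e.g., trained HMMs with Gaussian-mixture emissions for stage one); this is a structural illustration only, not the authors' implementation.

```python
# Structural sketch of two-stage HMM recognition (parameters assumed trained).
import numpy as np

def stage1_label(feature_seq, prime_models):
    """prime_models: dict prime gesture -> callable returning a log-likelihood."""
    return max(prime_models, key=lambda p: prime_models[p](feature_seq))

def stage2_task(prime_sequence, task_models, prime_index):
    """task_models: dict task -> (pi, A, B) of a discrete HMM over prime-gesture ids."""
    symbols = [prime_index[p] for p in prime_sequence]

    def loglik(pi, A, B):
        alpha = pi * B[:, symbols[0]]
        ll = np.log(alpha.sum())
        alpha /= alpha.sum()
        for s in symbols[1:]:
            alpha = (alpha @ A) * B[:, s]
            ll += np.log(alpha.sum())
            alpha /= alpha.sum()
        return ll

    return max(task_models, key=lambda t: loglik(*task_models[t]))

def recognise(segmented_gestures, prime_models, task_models, prime_index):
    primes = [stage1_label(seq, prime_models) for seq in segmented_gestures]
    return primes, stage2_task(primes, task_models, prime_index)
```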

  16. Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing.

    Science.gov (United States)

    Demir-Lira, Özlem Ece; Asaridou, Salomi S; Raja Beharelle, Anjali; Holt, Anna E; Goldin-Meadow, Susan; Small, Steven L

    2018-03-08

    Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture-speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture-speech integration in children overlaps with-but is broader than-the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration. © 2018 John Wiley & Sons Ltd.

  17. Grounded Blends and Mathematical Gesture Spaces: Developing Mathematical Understandings via Gestures

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    This paper examines how a person's gesture space can become endowed with mathematical meaning associated with mathematical spaces and how the resulting mathematical gesture space can be used to communicate and interpret mathematical features of gestures. We use the theory of grounded blends to analyse a case study of two teachers who used gestures…

  18. Angle-of-arrival-based gesture recognition using ultrasonic multi-frequency signals

    KAUST Repository

    Chen, Hui; Ballal, Tarig; Saad, Mohamed; Al-Naffouri, Tareq Y.

    2017-01-01

    transducer. The 2-D angles of the moving hand are estimated using multi-frequency signals captured by a fixed receiver array. A simple redundant dictionary matching classifier is designed to recognize gestures representing the numbers from `0' to `9

  19. Language, Gesture, Action! A Test of the Gesture as Simulated Action Framework

    Science.gov (United States)

    Hostetter, Autumn B.; Alibali, Martha W.

    2010-01-01

    The Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) holds that representational gestures are produced when actions are simulated as part of thinking and speaking. Accordingly, speakers should gesture more when describing images with which they have specific physical experience than when describing images that are less…

  20. Hand-Eye Calibration and Inverse Kinematics of Robot Arm using Neural Network

    DEFF Research Database (Denmark)

    Wu, Haiyan; Tizzano, Walter; Andersen, Thomas Timm

    2013-01-01

    Traditional technologies for solving hand-eye calibration and inverse kinematics are cumbersome and time consuming due to the high nonlinearity in the models. An alternative to the traditional approaches is the artificial neural network, inspired by the remarkable abilities of animals in different...

  1. What Iconic Gesture Fragments Reveal about Gesture-Speech Integration: When Synchrony Is Lost, Memory Can Help

    Science.gov (United States)

    Obermeier, Christian; Holle, Henning; Gunter, Thomas C.

    2011-01-01

    The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive…

  2. Impaired imitation of gestures in mild dementia: comparison of dementia with Lewy bodies, Alzheimer's disease and vascular dementia.

    Science.gov (United States)

    Nagahama, Yasuhiro; Okina, Tomoko; Suzuki, Norio

    2015-11-01

    We examined whether imitation of gestures provides useful information for diagnosing early dementia in elderly patients. Imitation of finger and hand gestures was evaluated in patients with mild dementia; 74 patients had dementia with Lewy bodies (DLB), 100 with Alzheimer's disease (AD) and 52 with subcortical vascular dementia (SVaD). Significantly more patients with DLB (32.4%) compared with patients with AD (5%) or SVaD (11.5%) had an impaired ability to imitate finger gestures bilaterally. Also, significantly more patients with DLB (36.5%) compared with patients with AD (5%) or SVaD (15.4%) had lower mean scores of both hands. In contrast, impairment of the imitation of bimanual gestures was comparable among the three patient groups (DLB 50%, AD 42%, SVaD 42.3%). Our study revealed that imitation of bimanual gestures was impaired non-specifically in about half of the patients with mild dementia, whereas imitation of finger gestures was significantly more impaired in patients with early DLB than in those with AD or SVaD. Although the sensitivity was not high, the imitation tasks may provide additional information for diagnosis of mild dementia, especially for DLB. Published by the BMJ Publishing Group Limited.

  3. The relationship between temperamental traits and the level of performance of an eye-hand co-ordination task in jet pilots.

    Science.gov (United States)

    Biernacki, Marcin; Tarnowski, Adam

    2008-01-01

    When assessing the psychological suitability for the profession of a pilot, it is important to consider personality traits and psychomotor abilities. Our study aimed at estimating the role of temperamental traits as components of pilots' personality in eye-hand co-ordination. The assumption was that differences in the escalation of the level of temperamental traits, as measured with the Formal Characteristic of Behaviour-Temperament Inventory (FCB-TI), will significantly influence eye-hand co-ordination. At the level of general scores, enhanced briskness proved to be the most important trait for eye-hand co-ordination. An analysis of partial scores additionally underlined the importance of sensory sensitivity, endurance and activity. The application of eye-hand co-ordination tasks, which involve energetic and temporal dimensions of performance, helped to disclose the role of biologically-based personality traits in psychomotor performance. The implication of these findings for selecting pilots is discussed.

  4. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    Science.gov (United States)

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye-arm-hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking similar control systems found in humans. We validate our model for visuomotor control of a humanoid robot.

  5. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.
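
    As a rough illustration of the region-based detection idea described above, the sketch below fine-tunes a generic torchvision Faster R-CNN detector for a single "hand" class. It is an assumption-laden stand-in, not the paper's VGG16-based multiscale fusion model; the backbone, pretrained weights, and two-class setup are illustrative choices only.

```python
# Hedged sketch: adapt a stock torchvision detector to a single "hand" class.
# This is NOT the paper's multiscale VGG16 model; it only illustrates the
# region-proposal + classification pattern the record refers to.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_hand_detector(num_classes=2):  # class 0 = background, class 1 = hand
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Swap the box predictor so the detector outputs only hand/background scores.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model  # fine-tune with images and {"boxes", "labels"} targets
```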

  6. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    Science.gov (United States)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.
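
    To make the mapping idea concrete, here is a minimal radial basis function regressor from image coordinates to arm configuration, fit by batch least squares. It only sketches the general RBF mapping; the record's incremental node-decoupled extended Kalman filter training is not reproduced, and the centers, widths, and batch fit below are assumptions.

```python
# Simplified sketch of an eye/hand RBF mapping: Gaussian basis functions over
# image coordinates, output weights fit by batch least squares (not the paper's
# incremental Kalman-filter training).
import numpy as np

class RBFMap:
    def __init__(self, centers, width):
        self.centers = np.asarray(centers)  # (n_centers, 2) image coordinates
        self.width = width

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, image_pts, arm_targets):
        """image_pts: (n, 2) pixel coords; arm_targets: (n, k) joint/posture values."""
        Phi = self._phi(np.asarray(image_pts))
        self.W, *_ = np.linalg.lstsq(Phi, np.asarray(arm_targets), rcond=None)
        return self

    def predict(self, image_pts):
        return self._phi(np.asarray(image_pts)) @ self.W
```

    In the incremental setting described in the record, the same basis-function outputs would be updated sample by sample (and new centers allocated) rather than refit in batch.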

  7. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    Science.gov (United States)

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
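
    The AX = ZB formulation lends itself to a straightforward, if not necessarily optimal, nonlinear least-squares solution. The sketch below is one such iterative approach, using a rotation-vector parameterization and scipy; it is illustrative only and not the specific iterative methods evaluated in the record.

```python
# Hedged sketch: solve AX = ZB for the unknown 4x4 transforms X and Z by
# nonlinear least squares over measured pairs (A_i, B_i). Parameterization and
# solver choice are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_T(rvec, t):
    """Build a 4x4 homogeneous transform from a rotation vector and translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rvec).as_matrix()
    T[:3, 3] = t
    return T

def residuals(params, A_list, B_list):
    # params = [rvec_X (3), t_X (3), rvec_Z (3), t_Z (3)]
    X = to_T(params[0:3], params[3:6])
    Z = to_T(params[6:9], params[9:12])
    res = []
    for A, B in zip(A_list, B_list):
        # Compare the top three rows; a scale weighting between rotation and
        # translation entries could be added for a refined solution.
        res.append((A @ X - Z @ B)[:3, :].ravel())
    return np.concatenate(res)

def solve_axzb(A_list, B_list):
    sol = least_squares(residuals, np.zeros(12), args=(A_list, B_list))
    return to_T(sol.x[0:3], sol.x[3:6]), to_T(sol.x[6:9], sol.x[9:12])
```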

  8. Generating Control Commands From Gestures Sensed by EMG

    Science.gov (United States)

    Wheeler, Kevin R.; Jorgensen, Charles

    2006-01-01

    An effort is under way to develop noninvasive neuro-electric interfaces through which human operators could control systems as diverse as simple mechanical devices, computers, aircraft, and even spacecraft. The basic idea is to use electrodes on the surface of the skin to acquire electromyographic (EMG) signals associated with gestures, digitize and process the EMG signals to recognize the gestures, and generate digital commands to perform the actions signified by the gestures. In an experimental prototype of such an interface, the EMG signals associated with hand gestures are acquired by use of several pairs of electrodes mounted in sleeves on a subject's forearm (see figure). The EMG signals are sampled and digitized. The resulting time-series data are fed as input to pattern-recognition software that has been trained to distinguish gestures from a given gesture set. The software implements, among other things, hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Thus far, two experiments have been performed on the prototype interface to demonstrate feasibility: an experiment in synthesizing the output of a joystick and an experiment in synthesizing the output of a computer or typewriter keyboard. In the joystick experiment, the EMG signals were processed into joystick commands for a realistic flight simulator for an airplane. The acting pilot reached out into the air, grabbed an imaginary joystick, and pretended to manipulate the joystick to achieve left and right banks and up and down pitches of the simulated airplane. In the keyboard experiment, the subject pretended to type on a numerical keypad, and the EMG signals were processed into keystrokes. The results of the experiments demonstrate the basic feasibility of this method while indicating the need for further research to reduce the incidence of errors (including confusion among gestures). Topics that must be addressed include the numbers and arrangements
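
    A common way to implement the hidden-Markov-model stage described above is to train one model per gesture on sequences of EMG feature vectors and pick the gesture whose model assigns the highest likelihood. The sketch below assumes the hmmlearn package and a simple windowed-RMS feature; neither is taken from the prototype interface itself.

```python
# Hedged sketch: one Gaussian HMM per gesture over windowed-RMS EMG features,
# classification by maximum log-likelihood. Feature choice and all parameter
# values are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def rms_features(emg, win=64):
    """Windowed RMS per channel; emg has shape (n_samples, n_channels)."""
    n = (len(emg) // win) * win
    frames = emg[:n].reshape(-1, win, emg.shape[1])
    return np.sqrt((frames ** 2).mean(axis=1))

def train_models(sequences_by_gesture, n_states=4):
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        feats = [rms_features(s) for s in seqs]
        X, lengths = np.vstack(feats), [len(f) for f in feats]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, emg_sequence):
    feats = rms_features(emg_sequence)
    return max(models, key=lambda g: models[g].score(feats))
```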

  9. Grammatical Aspect and Gesture in French: A kinesiological approach

    Directory of Open Access Journals (Sweden)

    Доминик Бутэ

    2016-12-01

    Full Text Available In this paper, we defend the idea that research on Gesture with Speech can provide ways of studying speakers’ conceptualization of grammatical notions as they are speaking. Expressing an idea involves a dynamic interplay between our construal, shaped by the sensori-motoric and interactive experiences linked to that idea, the plurisemiotic means at our disposal for expressing it, and the linguistic category available for its expression in our language. By analyzing the expression of aspect in Speech with Gesture (GeSp) in semi-guided oral interactions, we would like to make a new contribution to the field of aspect by exploring how speakers’ construal of aspectual differences grammaticalized in their language may be enacted and visible in gesture. More specifically we want to see the degree to which event structure differences expressed in different grammatical aspects (perfective and imperfective) correlate with kinesiological features of the gestures. To this end, we will focus on the speed and flow of the movements as well as on the segments involved (fingers, hand, forearm, arm, shoulder). A kinesiological approach to gestures enables us to analyze the movements of human bodies according to a biomechanical point of view that includes physiological features. This study is the first contribution focused on the links between speech and gesture in French in the domain of grammatical aspect. Grammatical aspect was defined by Comrie (1976 [1989]) as involving the internal unfurling of the process, «[...] tense is a deictic category, i.e. locates situations in time, usually with reference to the present moment [...]. Aspect is not concerned with relating time of the situation to any other time-point, but rather with the internal temporal constituency of the one situation; one could state the difference as one between situation-internal time (aspect) and situation-external time (tense)» (Comrie, 1976 [1989]: 5). Can kinesic features express and make…

  10. Robotic Eye-in-hand Calibration in an Uncalibrated Environment

    Directory of Open Access Journals (Sweden)

    Sebastian Van Delden

    2008-12-01

    Full Text Available The optical flow of high interest points in images of an uncalibrated scene is used to recover the camera orientation of an eye-in-hand robotic manipulator. The system is completely automated, iteratively performing a sequence of rotations and translations until the camera frame is aligned with the manipulator's world frame. The manipulator must be able to translate and rotate its end-effector with respect to its world frame. The system is implemented and being tested on a Stäubli RX60 manipulator using an off-the-shelf Logitech USB camera.

  11. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures.

    Science.gov (United States)

    Cherdieu, Mélaine; Palombi, Olivier; Gerber, Silvain; Troccaz, Jocelyne; Rochet-Capellan, Amélie

    2017-01-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' body. Two groups of participants were asked to look at a video lecture on the forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.

  12. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    Science.gov (United States)

    Cherdieu, Mélaine; Palombi, Olivier; Gerber, Silvain; Troccaz, Jocelyne; Rochet-Capellan, Amélie

    2017-01-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' body. Two groups of participants were asked to look at a video lecture on the forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories. PMID:29062287

  13. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    Directory of Open Access Journals (Sweden)

    Mélaine Cherdieu

    2017-10-01

    Full Text Available Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' body. Two groups of participants were asked to look at a video lecture on the forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.

  14. Gesture Decoding Using ECoG Signals from Human Sensorimotor Cortex: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-01-01

    Full Text Available Electrocorticography (ECoG) has been demonstrated as a promising neural signal source for developing brain-machine interfaces (BMIs). However, many concerns about the disadvantages brought by large craniotomy for implanting the ECoG grid limit the clinical translation of ECoG-based BMIs. In this study, we collected clinical ECoG signals from the sensorimotor cortex of three epileptic participants when they performed hand gestures. The ECoG power spectrum in hybrid frequency bands was extracted to build a synchronous real-time BMI system. High decoding accuracy of the three gestures was achieved in both offline analysis (85.7%, 84.5%, and 69.7%) and online tests (80% and 82%, tested on two participants only). We found that the decoding performance was maintained even with a subset of channels selected by a greedy algorithm. More importantly, these selected channels were mostly distributed along the central sulcus and clustered in the area of 3 interelectrode squares. Our findings of the reduced and clustered distribution of ECoG channels further supported the feasibility of clinically implementing the ECoG-based BMI system for the control of hand gestures.
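
    One plausible reading of this pipeline is band-power features per channel followed by greedy forward channel selection under cross-validation. The sketch below uses assumed frequency bands, a linear SVM, and scipy/scikit-learn; it does not reproduce the study's exact hybrid bands or decoder.

```python
# Hedged sketch: log band-power features per ECoG channel plus greedy forward
# channel selection scored by cross-validated accuracy. Bands, classifier, and
# selection criterion are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = [(8, 32), (60, 200)]  # assumed low-frequency and high-gamma bands, in Hz

def band_power(trials, fs):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels, n_bands)."""
    f, p = welch(trials, fs=fs, nperseg=256, axis=-1)
    return np.stack([np.log(p[..., (f >= lo) & (f < hi)].mean(-1)) for lo, hi in BANDS], -1)

def greedy_channel_selection(feats, labels, n_keep=8):
    """Forward selection of channels by cross-validated classification accuracy."""
    selected, remaining = [], list(range(feats.shape[1]))
    while remaining and len(selected) < n_keep:
        def score(ch):
            X = feats[:, selected + [ch], :].reshape(len(feats), -1)
            return cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```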

  15. Needing a Safe Pair of Hands: Functioning and health-related quality of life in children with congenital hand differences

    NARCIS (Netherlands)

    M.S. Ardon (Monique )

    2014-01-01

    Our hands are extensively used in everyday activities and are the primary means of interaction with our environment. We use our hands for eating, bathing, gesturing and in childhood they are one of our instruments to discover the world. Especially in the developing…

  16. Optical coherence tomography based 1D to 6D eye-in-hand calibration

    DEFF Research Database (Denmark)

    Antoni, Sven Thomas; Otte, Christoph; Savarimuthu, Thiusius Rajeeth

    2017-01-01

    ... i.e., it can be easily integrated with instruments. However, to use OCT for intra-operative guidance its spatial alignment needs to be established. Hence, we consider eye-in-hand calibration between the 1D OCT imaging and a 6D robotic position system. We present a method to perform pivot calibration for OCT and based on this introduce pivot+d, a new 1D to 6D eye-in-hand calibration. We provide detailed results on the convergence and accuracy of our method and use translational and rotational ground truth to show that our methods allow for submillimeter positioning accuracy of an OCT beam with a robot. For pivot calibration we observe a mean translational error of 0.5161 ± 0.4549 mm while pivot+d shows 0.3772 ± 0.2383 mm. Additionally, pivot+d improves rotation detection by about 8° when compared to pivot calibration.
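
    Pivot calibration of the kind referred to here is commonly posed as a linear least-squares problem: with the tool tip held on a fixed point while the robot moves through poses (R_i, t_i), the constraints R_i p_tool + t_i = p_pivot are stacked and solved for both unknown points. The sketch below shows that generic pivot step only; the paper's OCT-specific pivot+d extension is not reproduced.

```python
# Hedged sketch of generic pivot calibration by linear least squares, assuming
# flange poses (R_i, t_i) recorded while the tool tip rests on a fixed pivot.
import numpy as np

def pivot_calibration(rotations, translations):
    """rotations: list of (3,3) matrices; translations: list of (3,) vectors."""
    A_rows, b_rows = [], []
    for R, t in zip(rotations, translations):
        A_rows.append(np.hstack([R, -np.eye(3)]))  # unknowns: [p_tool, p_pivot]
        b_rows.append(-np.asarray(t))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tool tip in flange frame, pivot point in base frame
```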

  17. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.

    Science.gov (United States)

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-04-15

    Polyarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions; however, there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a polyarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real-time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.

  18. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of the society. Unfortunately, sign language is not understood by most of the common people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding the sign language gestures by a custom-built software tool and later translating the gesture into a vocal language. For the purpose of recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm and an off-the-shelf software tool is employed for vocal language generation. Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
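
    The Dynamic Time Warping step can be illustrated with a few lines of NumPy: compute the DTW distance between an observed joint-trajectory sequence and each stored template, then return the nearest gesture. The feature choice (per-frame skeleton coordinates) and the plain DTW variant are assumptions, not the tool's actual implementation.

```python
# Hedged sketch: DTW distance between trajectories plus nearest-template
# classification over a gesture dictionary.
import numpy as np

def dtw_distance(a, b):
    """a, b: (n_frames, n_features) trajectories, e.g., stacked joint positions."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sequence, templates):
    """templates: dict mapping gesture name -> list of recorded trajectories."""
    return min(templates, key=lambda g: min(dtw_distance(sequence, t) for t in templates[g]))
```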

  19. Complementary Hand Responses Occur in Both Peri- and Extrapersonal Space.

    Directory of Open Access Journals (Sweden)

    Tim W Faber

    Full Text Available Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one's right finger upon observing a left finger movement; from a mirror perspective). Many observed actions however, do not require mirror-congruent responses but afford complementary (fitting) responses instead (e.g., handing over a cup; shaking hands). Crucially, whereas mirror-congruent responses don't require physical interaction with another person, complementary actions often do. Given that most experiments studying motor priming have used stimuli devoid of contextual information, this space or interaction-dependency of complementary responses has not yet been assessed. To address this issue, we let participants perform a task in which they had to mirror or complement a hand gesture (fist or open hand) performed by an actor depicted either within or outside of reach. In three studies, we observed faster reaction times and less response errors for complementary relative to mirrored hand movements in response to open hand gestures (i.e., 'hand-shaking') irrespective of the perceived interpersonal distance of the actor. This complementary effect could not be accounted for by a low-level spatial cueing effect. These results demonstrate that humans have a strong and automatic tendency to respond by performing complementary actions. In addition, our findings underline the limitations of manipulations of space in modulating effects of motor priming and the perception of affordances.

  20. Complementary Hand Responses Occur in Both Peri- and Extrapersonal Space.

    Science.gov (United States)

    Faber, Tim W; van Elk, Michiel; Jonas, Kai J

    2016-01-01

    Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one's right finger upon observing a left finger movement; from a mirror perspective). Many observed actions however, do not require mirror-congruent responses but afford complementary (fitting) responses instead (e.g., handing over a cup; shaking hands). Crucially, whereas mirror-congruent responses don't require physical interaction with another person, complementary actions often do. Given that most experiments studying motor priming have used stimuli devoid of contextual information, this space or interaction-dependency of complementary responses has not yet been assessed. To address this issue, we let participants perform a task in which they had to mirror or complement a hand gesture (fist or open hand) performed by an actor depicted either within or outside of reach. In three studies, we observed faster reaction times and less response errors for complementary relative to mirrored hand movements in response to open hand gestures (i.e., 'hand-shaking') irrespective of the perceived interpersonal distance of the actor. This complementary effect could not be accounted for by a low-level spatial cueing effect. These results demonstrate that humans have a strong and automatic tendency to respond by performing complementary actions. In addition, our findings underline the limitations of manipulations of space in modulating effects of motor priming and the perception of affordances.

  1. Touch and Gesture-Based Language Learning: Some Possible Avenues for Research and Classroom Practice

    Science.gov (United States)

    Reinders, Hayo

    2014-01-01

    Our interaction with digital resources is becoming increasingly based on touch, gestures, and now also eye movement. Many everyday consumer electronics products already include touch-based interfaces, from e-book readers to tablets, and from the latest personal computers to the GPS system in your car. What implications do these new forms of…

  2. Control of a powered prosthetic device via a pinch gesture interface

    Science.gov (United States)

    Yetkin, Oguz; Wallace, Kristi; Sanford, Joseph D.; Popa, Dan O.

    2015-06-01

    A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on a user's sound hand in order to detect different grasp patterns. Experiments are presented with two different gesture tracking systems: one comprised of Conductive Thimbles worn on each finger (Conductive Thimble system), and another comprised of a glove which leaves the fingers free (Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using Conductive Glove system and the iPhone app provided by Touch Bionics. The best prosthetic device performance is reported with the developed Conductive Glove system in this test. Results show that these low encumbrance gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.

  3. Gestures Enhance Foreign Language Learning

    Directory of Open Access Journals (Sweden)

    Manuela Macedonia

    2012-11-01

    Full Text Available Language and gesture are highly interdependent systems that reciprocally influence each other. For example, performing a gesture when learning a word or a phrase enhances its retrieval compared to pure verbal learning. Although the enhancing effects of co-speech gestures on memory are known to be robust, the underlying neural mechanisms are still unclear. Here, we summarize the results of behavioral and neuroscientific studies. They indicate that the neural representation of words consists of complex multimodal networks connecting perception and motor acts that occur during learning. In this context, gestures can reinforce the sensorimotor representation of a word or a phrase, making it resistant to decay. Also, gestures can favor embodiment of abstract words by creating it from scratch. Thus, we propose the use of gesture as a facilitating educational tool that integrates body and mind.

  4. LAMI: A gesturally controlled three-dimensional stage Leap (Motion-based) Audio Mixing Interface

    OpenAIRE

    Wakefield, Jonathan P.; Dewey, Christopher; Gale, William

    2017-01-01

    Interface designers are increasingly exploring alternative approaches to user input/control. LAMI is a Leap (Motion-based) AMI which takes the user's hand gestures and maps these to a three-dimensional stage displayed on a computer monitor. Audio channels are visualised as spheres whose Y coordinate is spectral centroid and X and Z coordinates are controlled by hand position and represent pan and level respectively. Auxiliary send levels are controlled via wrist rotation and vertical hand positio...
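
    Under assumed coordinate conventions, the described mapping reduces to a small pure function: hand X position controls pan, hand Z position controls channel level, and wrist roll controls the auxiliary send. The ranges and names below are illustrative, not LAMI's actual parameters.

```python
# Hedged sketch of a hand-position-to-mixer mapping; all ranges are assumptions.
def map_hand_to_channel(x_norm, z_norm, roll_deg):
    """x_norm, z_norm in [0, 1] (normalized hand position); roll_deg in [-90, 90]."""
    pan = 2.0 * x_norm - 1.0                   # -1 (hard left) .. +1 (hard right)
    level_db = -60.0 + 60.0 * (1.0 - z_norm)   # nearer to the user = louder
    aux_send = max(0.0, min(1.0, (roll_deg + 90.0) / 180.0))
    return {"pan": pan, "level_db": level_db, "aux_send": aux_send}
```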

  5. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    Science.gov (United States)

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

    Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low power and high performance microcontroller for on-board processing. Our system achieves the same accuracy of high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality. At the same time, due to its flexible configuration, it can be compared to the few wearable platforms designed for EMG gesture recognition available on market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which were used to evaluate the impact of the number of EMG channels, the number of recognized gestures and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented a SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, which is aligned with the state-of-the-art off-line results and a 29.7 mW power consumption, guaranteeing 44 hours of continuous operation with a 400 mAh battery.
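
    The recognition stage described above can be approximated off-board with standard tools: simple time-domain features over sliding EMG windows feeding a support vector machine. The window features, scaling, and SVM settings below are assumptions chosen for illustration, not the platform's embedded firmware.

```python
# Hedged sketch: per-window mean absolute value and waveform length features
# per EMG channel, classified with an SVM pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def window_features(window):
    """window: (n_samples, n_channels) raw EMG -> per-channel MAV and waveform length."""
    mav = np.abs(window).mean(axis=0)
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    return np.concatenate([mav, wl])

def train_classifier(windows, labels):
    X = np.array([window_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf  # clf.predict(window_features(new_window)[None, :]) at runtime
```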

  6. Stretchable Triboelectric-Photonic Smart Skin for Tactile and Gesture Sensing.

    Science.gov (United States)

    Bu, Tianzhao; Xiao, Tianxiao; Yang, Zhiwei; Liu, Guoxu; Fu, Xianpeng; Nie, Jinhui; Guo, Tong; Pang, Yaokun; Zhao, Junqing; Xi, Fengben; Zhang, Chi; Wang, Zhong Lin

    2018-04-01

    Smart skin is expected to be stretchable and tactile for bionic robots as the medium of interaction with the ambient environment. Here, a stretchable triboelectric-photonic smart skin (STPS) is reported that enables multidimensional tactile and gesture sensing for a robotic hand. With a grating-structured metal film as the bioinspired skin stripe, the STPS exhibits a tunable aggregation-induced emission in a lateral tensile range of 0-160%. Moreover, the STPS can be used as a triboelectric nanogenerator for vertical pressure sensing with a maximum sensitivity of 34 mV/Pa. The pressure sensing characteristics can remain stable in different stretching conditions, which demonstrates a synchronous and independent sensing property for external stimuli with great durability. By integrating on a robotic hand as a conformal covering, the STPS shows multidimensional mechanical sensing abilities for external touch and different gestures with joints bending. This work is the first to demonstrate a triboelectric-photonic coupled multifunctional sensing terminal, which may have great applications in human-machine interaction, soft robots, and artificial intelligence. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Hand Motion Classification Using a Multi-Channel Surface Electromyography Sensor

    Directory of Open Access Journals (Sweden)

    Dong Sun

    2012-01-01

    Full Text Available The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.
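
    The two proposed feature types can be written down compactly under assumed definitions: "energy ratio" as each channel's share of the total signal energy in a window, and the concordance correlation coefficient computed for every pair of channels. The exact definitions used in the paper may differ in detail.

```python
# Hedged sketch of the two feature families named above, under assumed definitions.
import numpy as np
from itertools import combinations

def energy_ratio_features(emg):
    """emg: (n_samples, n_channels) window -> each channel's share of total energy."""
    energy = (emg ** 2).sum(axis=0)
    return energy / energy.sum()

def concordance_correlation(x, y):
    """Concordance correlation coefficient between two 1-D signals."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def concordance_features(emg):
    """Concordance correlation for every pair of channels in the window."""
    return np.array([concordance_correlation(emg[:, i], emg[:, j])
                     for i, j in combinations(range(emg.shape[1]), 2)])
```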

  8. Hand motion classification using a multi-channel surface electromyography sensor.

    Science.gov (United States)

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.

  9. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    Science.gov (United States)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

    The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their accessibility to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.

  10. Impaired imitation of meaningless gestures in ideomotor apraxia: a conceptual problem not a disorder of action control? A single case investigation.

    Science.gov (United States)

    Sunderland, Alan

    2007-04-09

    A defining characteristic of ideomotor apraxia is an inability to imitate meaningless gestures. This is widely interpreted as being due to difficulties in the formulation or execution of motor programs for complex action, but an alternative view is that there is a higher level cognitive problem in conceptualisation of the target posture. In a single case with inferior left parietal and temporal damage, severely impaired imitation was accompanied by preserved motor skill and spatial awareness but inability to make a conceptual match between the fingers of his own hand and an observed hand. Also, he was able to match pictures of visually similar gestures but not cartoon drawings of gestures which were conceptually the same but visually dissimilar. Knowledge of body structure seemed largely intact as he was only slightly inaccurate in showing correspondences between locations on drawings of a human figure and his own body, or a visually dissimilar figure. This indicated that difficulty on matching gestures was specific to representation of body posture rather than body structure, or that gesture imitation tasks place higher demand on a structural representation of the body. These data imply that for at least some cases of ideomotor apraxia, impaired gesture imitation is due to a deficit in representing the observed posture and is not a deficit in memory for action or of motor control.

  11. What makes a movement a gesture?

    Science.gov (United States)

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Dissociating Neural Correlates of Meaningful Emblems from Meaningless Gestures in Deaf Signers and Hearing Non-Signers

    Science.gov (United States)

    Husain, Fatima T.; Patkin, Debra J.; Kim, Jieun; Braun, Allen R.; Horwitz, Barry

    2012-01-01

    Emblems are meaningful, culturally-specific hand gestures that are analogous to words. In this fMRI study, we contrasted the processing of emblematic gestures with meaningless gestures by pre-lingually Deaf and hearing participants. Deaf participants, who used American Sign Language, activated bilateral auditory processing and associative areas in the temporal cortex to a greater extent than the hearing participants while processing both types of gestures relative to rest. The hearing non-signers activated a diverse set of regions, including those implicated in the mirror neuron system, such as premotor cortex (BA 6) and inferior parietal lobule (BA 40) for the same contrast. Further, when contrasting the processing of meaningful to meaningless gestures (both relative to rest), the Deaf participants, but not the hearing, showed greater response in the left angular and supramarginal gyri, regions that play important roles in linguistic processing. These results suggest that whereas the signers interpreted emblems to be comparable to words, the non-signers treated emblems as similar to pictorial descriptions of the world and engaged the mirror neuron system. PMID:22968047

  13. Fully embedded myoelectric control for a wearable robotic hand orthosis.

    Science.gov (United States)

    Ryser, Franziska; Butzer, Tobias; Held, Jeremia P; Lambercy, Olivier; Gassert, Roger

    2017-07-01

    To prevent learned non-use of the affected hand in chronic stroke survivors, rehabilitative training should be continued after discharge from the hospital. Robotic hand orthoses are a promising approach for home rehabilitation. When combined with intuitive control based on electromyography, the therapy outcome can be improved. However, such systems often require extensive cabling, experience in electrode placement and connection to external computers. This paper presents the framework for a stand-alone, fully wearable and real-time myoelectric intention detection system based on the Myo armband. The hardware and software for real-time gesture classification were developed and combined with a routine to train and customize the classifier, leading to a unique ease of use. The system, including training of the classifier, can be set up within less than one minute. Results demonstrated that: (1) the proposed algorithm can classify five gestures with an accuracy of 98%, (2) the final system can classify three gestures online with an accuracy of 94.3% and, in a preliminary test, (3) classify three gestures from data acquired from mildly to severely impaired stroke survivors with an accuracy of over 78.8%. These results highlight the potential of the presented system for electromyography-based intention detection for stroke survivors and, with the integration of the system into a robotic hand orthosis, the potential for a wearable platform for all-day robot-assisted home rehabilitation.
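
    Real-time intention detection of this kind usually smooths the raw per-window classifier output before driving the orthosis. A simple way to do so is a majority vote over the most recent predictions, as sketched below; the window size and voting rule are assumptions, not the system's actual post-processing.

```python
# Hedged sketch: stabilize frame-by-frame gesture predictions with a majority
# vote over a short history buffer before acting on them.
from collections import Counter, deque

class MajorityVoteSmoother:
    def __init__(self, window=9):
        self.history = deque(maxlen=window)

    def update(self, predicted_gesture):
        """Feed the latest raw prediction; return the smoothed (voted) gesture."""
        self.history.append(predicted_gesture)
        return Counter(self.history).most_common(1)[0][0]
```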

  14. Eye and hand movements during reconstruction of spatial memory.

    Science.gov (United States)

    Burke, Melanie R; Allen, Richard J; Gonzalez, Claudia

    2012-01-01

    Recent behavioural and biological evidence indicates common mechanisms serving working memory and attention (e.g., Awh et al, 2006 Neuroscience 139 201-208). This study explored the role of spatial attention and visual search in an adapted Corsi spatial memory task. Eye movements and touch responses were recorded from participants who recalled locations (signalled by colour or shape change) from an array presented either simultaneously or sequentially. The time delay between target presentation and recall (0, 5, or 10 s) and the number of locations to be remembered (2-5) were also manipulated. Analysis of the response phase revealed subjects were less accurate (touch data) and fixated longer (eye data) when responding to sequentially presented targets suggesting higher cognitive effort. Fixation duration on target at recall was also influenced by whether spatial location was initially signalled by colour or shape change. Finally, we found that the sequence tasks encouraged longer fixations on the signalled targets than simultaneous viewing during encoding, but no difference was observed during recall. We conclude that the attentional manipulations (colour/shape) mainly affected the eye movement parameters, whereas the memory manipulation (sequential versus simultaneous, number of items) mainly affected the performance of the hand during recall, and thus the latter is more important for ascertaining if an item is remembered or forgotten. In summary, the nature of the stimuli that is used and how it is presented play key roles in determining subject performance and behaviour during spatial memory tasks.

  15. Gesture en route to words

    DEFF Research Database (Denmark)

    Jensen de López, Kristine M.

    2010-01-01

    This study explores the communicative production of gestural and vocal modalities by 8 normally developing children (16 to 20 months) in two different cultures (Danish and Zapotec: Mexican indigenous). We analyzed spontaneous production of gestures and words in children's transition to the two-word... the children showed an early preference for the gestural or vocal modality. Through analyses of two-element combinations of words and/or gestures, we observed a relative increase in cross-modal (gesture-word and two-word) combinations. The results are discussed in terms of understanding gestures as a transition...

  16. Touch versus In-Air Hand Gestures: Evaluating the Acceptance by Seniors of Human-Robot Interaction

    NARCIS (Netherlands)

    Znagui Hassani, Anouar; van Dijk, Elisabeth M.A.G.; Ludden, Geke Dina Simone; Eertink, Henk

    2011-01-01

    Do elderly people have a preference between performing in-air gestures and pressing screen buttons to interact with an assistive robot? This study attempts to provide answers to this question by measuring the level of acceptance, performance as well as knowledge of both interaction modalities during a

  17. Feasibility of interactive gesture control of a robotic microscope

    Directory of Open Access Journals (Sweden)

    Antoni Sven-Thomas

    2015-09-01

    Full Text Available Robotic devices are becoming increasingly available in clinics. One example is the motorized surgical microscope. While there are different scenarios on how to use the devices for autonomous tasks, simple and reliable interaction with the device is key for acceptance by surgeons. We study how gesture tracking can be integrated within the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near field gesture sensors is feasible and can be of use in clinical scenarios, where robotic devices are used in direct proximity to patients and physicians.

  18. Thirty years of great ape gestures.

    Science.gov (United States)

    Tomasello, Michael; Call, Josep

    2018-02-21

    We and our colleagues have been doing studies of great ape gestural communication for more than 30 years. Here we attempt to spell out what we have learned. Some aspects of the process have been reliably established by multiple researchers, for example, its intentional structure and its sensitivity to the attentional state of the recipient. Other aspects are more controversial. We argue here that it is a mistake to assimilate great ape gestures to the species-typical displays of other mammals by claiming that they are fixed action patterns, as there are many differences, including the use of attention-getters. It is also a mistake, we argue, to assimilate great ape gestures to human gestures by claiming that they are used referentially and declaratively in a human-like manner, as apes' "pointing" gesture has many limitations and they do not gesture iconically. Great ape gestures constitute a unique form of primate communication with their own unique qualities.

  19. The ontogenetic ritualization of bonobo gestures.

    Science.gov (United States)

    Halina, Marta; Rossano, Federico; Tomasello, Michael

    2013-07-01

    Great apes communicate with gestures in flexible ways. Based on several lines of evidence, Tomasello and colleagues have posited that many of these gestures are learned via ontogenetic ritualization-a process of mutual anticipation in which particular social behaviors come to function as intentional communicative signals. Recently, Byrne and colleagues have argued that all great ape gestures are basically innate. In the current study, for the first time, we attempted to observe the process of ontogenetic ritualization as it unfolds over time. We focused on one communicative function between bonobo mothers and infants: initiation of "carries" for joint travel. We observed 1,173 carries in ten mother-infant dyads. These were initiated by nine different gesture types, with mothers and infants using many different gestures in ways that reflected their different roles in the carry interaction. There was also a fair amount of variability among the different dyads, including one idiosyncratic gesture used by one infant. This gestural variation could not be attributed to sampling effects alone. These findings suggest that ontogenetic ritualization plays an important role in the origin of at least some great ape gestures.

  20. Kazakh Traditional Dance Gesture Recognition

    Science.gov (United States)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full body gesture recognition is an important and interdisciplinary research field which is widely used in many application spheres, including dance gesture recognition. The rapid growth of technology in recent years has contributed much to this domain. However, it is still a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. Then we apply a tree-structured Bayesian network and an Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding headwear as a new skeleton joint which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to the state-of-the-art system performances.
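
    The headwear extension can be pictured as a small depth-image search above the tracked head joint: keep pixels whose depth is close to the head's, and take the topmost one as the new joint. The region size, depth tolerance, and pixel conventions below are assumptions made for illustration, not the paper's actual procedure.

```python
# Hedged sketch: derive an extra "headwear" joint from the depth image by
# searching a small window above the tracked head joint.
import numpy as np

def headwear_joint(depth_img, head_px, head_depth_mm, box=40, tol_mm=150):
    """depth_img: (H, W) depth in mm; head_px: (row, col) of the tracked head joint."""
    r, c = head_px
    top, left = max(0, r - box), max(0, c - box // 2)
    region = depth_img[top:r, left:left + box]
    mask = np.abs(region.astype(float) - head_depth_mm) < tol_mm
    if not mask.any():
        return None  # no headwear found above the head
    rows, cols = np.nonzero(mask)
    i = rows.argmin()  # topmost pixel at roughly the head's depth
    return top + rows[i], left + cols[i]
```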

  1. Co-speech hand movements during narrations: What is the impact of right vs. left hemisphere brain damage?

    Science.gov (United States)

    Hogrefe, Katharina; Rein, Robert; Skomroch, Harald; Lausberg, Hedda

    2016-12-01

    Persons with brain damage show deviant patterns of co-speech hand movement behaviour in comparison to healthy speakers. It has been claimed by several authors that gesture and speech rely on a single production mechanism that depends on the same neurological substrate while others claim that both modalities are closely related but separate production channels. Thus, findings so far are contradictory and there is a lack of studies that systematically analyse the full range of hand movements that accompany speech in the condition of brain damage. In the present study, we aimed to fill this gap by comparing hand movement behaviour in persons with unilateral brain damage to the left and the right hemisphere and a matched control group of healthy persons. For hand movement coding, we applied Module I of NEUROGES, an objective and reliable analysis system that enables to analyse the full repertoire of hand movements independent of speech, which makes it specifically suited for the examination of persons with aphasia. The main results of our study show a decreased use of communicative conceptual gestures in persons with damage to the right hemisphere and an increased use of these gestures in persons with left brain damage and aphasia. These results not only suggest that the production of gesture and speech do not rely on the same neurological substrate but also underline the important role of right hemisphere functioning for gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. To beg, or not to beg? That is the question: mangabeys modify their production of requesting gestures in response to human's attentional states.

    Directory of Open Access Journals (Sweden)

    Audrey Maille

    Full Text Available BACKGROUND: Although gestural communication is widespread in primates, few studies focused on the cognitive processes underlying gestures produced by monkeys. METHODOLOGY/PRINCIPAL FINDINGS: The present study asked whether red-capped mangabeys (Cercocebus torquatus) trained to produce visually based requesting gestures modify their gestural behavior in response to a human's attentional states. The experimenter held a food item and displayed five different attentional states that differed on the basis of body, head and gaze orientation; mangabeys had to request food by extending an arm toward the food item (begging gesture). Mangabeys were sensitive, at least to some extent, to the human's attentional state. They reacted to some postural cues of a human recipient: they gestured more and faster when both the body and the head of the experimenter were oriented toward them than when they were oriented away. However, they did not seem to use gaze cues to recognize an attentive human: monkeys begged at similar levels regardless of the state of the experimenter's eyes. CONCLUSIONS/SIGNIFICANCE: These results indicate that mangabeys lowered their production of begging gestures when these could not be perceived by the human who had to respond to them. This finding provides important evidence that acquired begging gestures of monkeys might be used intentionally.

  3. Body language: The interplay between positional behavior and gestural signaling in the genus Pan and its implications for language evolution.

    Science.gov (United States)

    Smith, Lindsey W; Delgado, Roberto A

    2015-08-01

    The gestural repertoires of bonobos and chimpanzees are well documented, but the relationship between gestural signaling and positional behavior (i.e., body postures and locomotion) has yet to be explored. Given that one theory for language evolution attributes the emergence of increased gestural communication to habitual bipedality, this relationship is important to investigate. In this study, we examined the interplay between gestures, body postures, and locomotion in four captive groups of bonobos and chimpanzees using ad libitum and focal video data. We recorded 43 distinct manual (involving upper limbs and/or hands) and bodily (involving postures, locomotion, head, lower limbs, or feet) gestures. In both species, actors used manual and bodily gestures significantly more when recipients were attentive to them, suggesting these movements are intentionally communicative. Adults of both species spent less than 1.0% of their observation time in bipedal postures or locomotion, yet 14.0% of all bonobo gestures and 14.7% of all chimpanzee gestures were produced when subjects were engaged in bipedal postures or locomotion. Among both bonobo groups and one chimpanzee group, these were mainly manual gestures produced by infants and juvenile females. Among the other chimpanzee group, however, these were mainly bodily gestures produced by adult males in which bipedal posture and locomotion were incorporated into communicative displays. Overall, our findings reveal that bipedality did not prompt an increase in manual gesturing in these study groups. Rather, body postures and locomotion are intimately tied to many gestures and certain modes of locomotion can be used as gestures themselves. © 2015 Wiley Periodicals, Inc.

  4. Gesture in the Developing Brain

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.

    2012-01-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old…

  5. Virtual Reality Glasses and "Eye-Hands Blind Technique" for Microsurgical Training in Neurosurgery.

    Science.gov (United States)

    Choque-Velasquez, Joham; Colasanti, Roberto; Collan, Juhani; Kinnunen, Riina; Rezai Jahromi, Behnam; Hernesniemi, Juha

    2018-04-01

    Microsurgical skills and eye-hand coordination need continuous training to be developed and refined. However, well-equipped microsurgical laboratories are not so widespread as their setup is expensive. Herein, we present a novel microsurgical training system that requires a high-resolution personal computer screen, smartphones, and virtual reality glasses. A smartphone placed on a holder at a height of about 15-20 cm from the surgical target field is used as the webcam of the computer. A specific software is used to duplicate the video camera image. The video may be transferred from the computer to another smartphone, which may be connected to virtual reality glasses. Using the previously described training model, we progressively performed more and more complex microsurgical exercises. It did not take long to set up our system, thus saving time for the training sessions. Our proposed training model may represent an affordable and efficient system to improve eye-hand coordination and dexterity in using not only the operating microscope but also endoscopes and exoscopes. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Effects of lips and hands on auditory learning of second-language speech sounds.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /kato/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  7. Music Conductor Gesture Recognized Interactive Music Generation System

    OpenAIRE

    CHEN, Shuai; MAEDA, Yoichiro; TAKAHASHI, Yasutake

    2012-01-01

    In research on interactive music generation, we propose a method in which the computer generates music automatically and then arranges it according to the human music conductor's gestures before output. In this research, the generated music is derived from chaotic sound, produced in real time by a network of chaotic elements. The conductor's hand motions are detected by a Microsoft Kinect in this system. Music theories are embedded ...

  8. Using the Hand to Choreograph Instruction: On the Functional Role of Gesture in Definition Talk

    Science.gov (United States)

    Belhiah, Hassan

    2013-01-01

    This article examines the coordination of speech and gesture in teachers' definition talk, that is, vocabulary explanations addressed to language learners. By analyzing one ESL teacher's spoken definitions, the study demonstrates in the details of the unfolding talk how a teacher crafts and choreographs his definitions moment by moment, while…

  9. DISABILITIES OF HANDS, FEET AND EYES IN NEWLY-DIAGNOSED LEPROSY PATIENTS IN EASTERN NEPAL

    NARCIS (Netherlands)

    SCHIPPER, A; LUBBERS, WJ; HOGEWEG, M; DESOLDENHOFF, R

    The objective of the study was to determine the magnitude of hand/feet/eye disabilities in newly diagnosed leprosy patients by examining all newly diagnosed leprosy patients who presented at the Eastern Leprosy Control Project (supported by The Netherlands Leprosy Relief Association), made up of a

  10. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour.

    Science.gov (United States)

    Özyürek, Aslı

    2014-09-19

    As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word into previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. 2D Hand Tracking Based on Flocking with Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Zihong Chen

    2014-02-01

    Full Text Available Hand gesture-based interaction provides a natural and powerful means for human-computer interaction. It is also a good interface for human-robot interaction. However, most of the existing proposals are likely to fail when they meet some skin-coloured objects, especially the face region. In this paper, we present a novel hand tracking method which can track the features of the hand based on the obstacle avoidance flocking behaviour model to overcome skin-coloured distractions. It allows features to be split into two groups under severe distractions and merge later. The experimental results show that our method can track the hand in a cluttered background or when passing the face, while the Flocking of Features (FoF) and the Mean Shift Embedded Particle Filter (MSEPF) methods may fail. These results suggest that our method has better performance in comparison with the previous methods. It may therefore be helpful to promote the use of the hand gesture-based human-robot interaction method.
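
    The sketch below illustrates, under stated assumptions, what a flocking-style update with an obstacle (face) repulsion term might look like for 2D feature positions. Matches from an external tracker (e.g., optical flow) are assumed, and the weights are arbitrary; this is an illustration of the idea, not the paper's exact model.

```python
# Minimal sketch: one flocking update for tracked hand features with face-region avoidance.
import numpy as np

def flocking_step(features, matches, obstacle_center, obstacle_radius,
                  w_match=0.6, w_cohesion=0.3, w_avoid=0.8):
    """One update of 2D feature positions; features and matches are N x 2 arrays."""
    centroid = features.mean(axis=0)
    new_pos = features + w_match * (matches - features)      # follow image evidence
    new_pos += w_cohesion * (centroid - features)            # stay close to the flock
    to_obstacle = new_pos - obstacle_center
    dist = np.linalg.norm(to_obstacle, axis=1, keepdims=True) + 1e-6
    inside = (dist < obstacle_radius).astype(float)          # only features inside the region
    new_pos += w_avoid * inside * (to_obstacle / dist) * (obstacle_radius - dist)
    return new_pos
```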

  12. The beneficial effect of a speaker's gestures on the listener's memory for action phrases: The pivotal role of the listener's premotor cortex.

    Science.gov (United States)

    Ianì, Francesco; Burin, Dalila; Salatino, Adriana; Pia, Lorenzo; Ricci, Raffaella; Bucciarelli, Monica

    2018-04-10

    Memory for action phrases improves in the listeners when the speaker accompanies them with gestures compared to when the speaker stays still. Since behavioral studies revealed a pivotal role of the listeners' motor system, we aimed to disentangle the role of primary motor and premotor cortices. Participants had to recall phrases uttered by a speaker in two conditions: in the gesture condition, the speaker performed gestures congruent with the action; in the no-gesture condition, the speaker stayed still. In Experiment 1, half of the participants underwent inhibitory rTMS over the hand/arm region of the left premotor cortex (PMC) and the other half over the hand/arm region of the left primary motor cortex (M1). The enactment effect disappeared only following rTMS over PMC. In Experiment 2, we detected the usual enactment effect after rTMS over vertex, thereby excluding possible nonspecific rTMS effects. These findings suggest that the information encoded in the premotor cortex is a crucial part of the memory trace. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Initial experiments with Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Graugaard, Lars

    2005-01-01

    The classic orchestra has a diminishing role in society, while hard-disc recorded music plays a predominant role today. A simple-to-use 2D pointer interface for producing music is presented as a means for playing in a social situation. The sounds of the music are produced by a low-level synthesizer, and the music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for spectral envelope and temporal envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset and several other manipulation gestures. The initial position controls which parameter is being affected, the note's intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. The synthesis employs a novel interface structure, the multiple musical gesture...

  14. Dissociating neural correlates of meaningful emblems from meaningless gestures in deaf signers and hearing non-signers.

    Science.gov (United States)

    Husain, Fatima T; Patkin, Debra J; Kim, Jieun; Braun, Allen R; Horwitz, Barry

    2012-10-10

    Emblems are meaningful, culturally-specific hand gestures that are analogous to words. In this fMRI study, we contrasted the processing of emblematic gestures with meaningless gestures by pre-lingually Deaf and hearing participants. Deaf participants, who used American Sign Language, activated bilateral auditory processing and associative areas in the temporal cortex to a greater extent than the hearing participants while processing both types of gestures relative to rest. The hearing non-signers activated a diverse set of regions, including those implicated in the mirror neuron system, such as premotor cortex (BA 6) and inferior parietal lobule (BA 40) for the same contrast. Further, when contrasting the processing of meaningful to meaningless gestures (both relative to rest), the Deaf participants, but not the hearing, showed greater response in the left angular and supramarginal gyri, regions that play important roles in linguistic processing. These results suggest that whereas the signers interpreted emblems to be comparable to words, the non-signers treated emblems as similar to pictorial descriptions of the world and engaged the mirror neuron system. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Gesturing more diminishes recall of abstract words when gesture is allowed and concrete words when it is taboo.

    Science.gov (United States)

    Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G

    2017-07-01

    Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

  16. Gestures and Insight in Advanced Mathematical Thinking

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding--in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities.…

  17. Eye Contact Is Crucial for Referential Communication in Pet Dogs.

    Science.gov (United States)

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get an unreachable food, dogs needed to communicate with their owners in several conditions that differ according to the direction of owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy compared to apes and baboons, that intensify vocalizations and gestures when human is not visually attending. The difference in strategy is possibly due to distinct status: domesticated vs wild. Results are discussed taking into account the ecological relevance of the task since pet dogs live in human environment and face similar situations on a daily basis during their lives.

  18. Research on Interaction-oriented Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Lu Huang

    2014-01-01

    Full Text Available This thesis designs a set of gesture interactions with the characteristics of natural human-machine interaction and uses 3D acceleration sensors as the interactive input. It then introduces a data-collection scheme for accelerometer-based gesture interaction, pre-processes the acquired gesture acceleration signals, and builds a Discrete Hidden Markov Model for gesture recognition. Finally, experiments confirm that the proposed design is workable and effective.
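
    A minimal sketch of the accelerometer-plus-discrete-HMM idea follows: each 3D acceleration sample is quantised against a small codebook, and the resulting symbol sequence is scored by per-gesture HMMs with the scaled forward algorithm. The codebook, model parameters and the recognise helper are hypothetical, not the thesis implementation.

```python
# Minimal sketch: vector quantisation of acceleration samples + discrete-HMM scoring.
import numpy as np

def quantise(accel, codebook):
    """Map each (ax, ay, az) sample to the index of its nearest codebook vector."""
    d = np.linalg.norm(accel[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def forward_loglik(symbols, start, trans, emit):
    """Scaled forward algorithm: log P(symbol sequence | HMM)."""
    alpha = start * emit[:, symbols[0]]
    c = alpha.sum() + 1e-300
    loglik = np.log(c)
    alpha /= c
    for s in symbols[1:]:
        alpha = (alpha @ trans) * emit[:, s]
        c = alpha.sum() + 1e-300
        loglik += np.log(c)
        alpha /= c
    return loglik

def recognise(accel, codebook, hmms):
    """Return the gesture name whose HMM (start, trans, emit) best explains the signal."""
    symbols = quantise(accel, codebook)
    return max(hmms, key=lambda g: forward_loglik(symbols, *hmms[g]))
```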

  19. Development of Pointing Gestures in Children with Typical and Delayed Language Acquisition

    Science.gov (United States)

    Lüke, Carina; Ritterfeld, Ute; Grimminger, Angela; Liszkowski, Ulf; Rohlfing, Katharina J.

    2017-01-01

    Purpose: This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the…

  20. Sequence for the Training of Eye-Hand Coordination Needed for the Organization of Handwriting Tasks

    Science.gov (United States)

    Trester, Mary Fran

    1971-01-01

    Suggested is a sequence of 11 class activities, progressing from gross to fine motor skills, to assist the development of skills required to perform handwriting tasks successfully, for use particularly with children who lack fine motor control and eye-hand coordination. (KW)

  1. Individual differences in the gesture effect on working memory.

    Science.gov (United States)

    Marstaller, Lars; Burianová, Hana

    2013-06-01

    Co-speech gestures have been shown to interact with working memory (WM). However, no study has investigated whether there are individual differences in the effect of gestures on WM. Combining a novel gesture/no-gesture task and an operation span task, we examined the differences in WM accuracy between individuals who gestured and individuals who did not gesture in relation to their WM capacity. Our results showed individual differences in the gesture effect on WM. Specifically, only individuals with low WM capacity showed a reduced WM accuracy when they did not gesture. Individuals with low WM capacity who did gesture, as well as high-capacity individuals (irrespective of whether they gestured or not), did not show the effect. Our findings show that the interaction between co-speech gestures and WM is affected by an individual's WM load.

  2. Eliminating drift of the head gesture reference to enhance Google Glass-based control of an NAO humanoid robot

    Directory of Open Access Journals (Sweden)

    Xiaoqian Mao

    2017-03-01

    Full Text Available This article presents a strategy for hands-free control of an NAO humanoid robot via head gestures detected by Google Glass-based multi-sensor fusion. First, we introduce a Google Glass-based robot system by integrating the Google Glass and the NAO humanoid robot, which is able to send robot commands through Wi-Fi communications between the Google Glass and the robot. Second, we detect the operator’s head gestures by processing data from multiple sensors including accelerometers, geomagnetic sensors and gyroscopes. Next, we use a complementary filter to eliminate drift of the head gesture reference, which greatly improves the control performance. This is accomplished by the high-pass filter component on the control signal. Finally, we conduct obstacle avoidance experiments while navigating the robot to validate the effectiveness and reliability of this system. The experimental results show that the robot is smoothly navigated from its initial position to its destination with obstacle avoidance via the Google Glass. This hands-free control system can benefit those with paralysed limbs.
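
    A complementary filter of the kind mentioned can be sketched in a few lines: the integrated gyroscope rate (responsive but drift-prone) is high-pass weighted, while the absolute angle from the accelerometer/magnetometer (noisy but drift-free) is low-pass weighted. The 0.98/0.02 split, the fake samples and the threshold-triggering note are assumptions, not the article's exact parameters.

```python
# Minimal sketch of a complementary filter for a head-orientation reference.
def complementary_filter(angle, gyro_rate, abs_angle, dt, alpha=0.98):
    """Fuse gyro integration (weight alpha) with an absolute angle (weight 1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * abs_angle

# Usage idea: run at the IMU sample rate; a robot command could be triggered when
# the fused yaw or pitch angle crosses a gesture threshold.
angle = 0.0
for gyro_rate, abs_angle in [(5.0, 0.4), (4.0, 1.1), (-2.0, 0.9)]:   # fabricated samples
    angle = complementary_filter(angle, gyro_rate, abs_angle, dt=0.02)
```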

  3. Two-Polarisation Physical Model of Bowed Strings with Nonlinear Contact and Friction Forces, and Application to Gesture-Based Sound Synthesis

    Directory of Open Access Journals (Sweden)

    Charlotte Desvages

    2016-05-01

    Full Text Available Recent bowed string sound synthesis has relied on physical modelling techniques; the achievable realism and flexibility of gestural control are appealing, and the heavier computational cost becomes less significant as technology improves. A bowed string sound synthesis algorithm is designed, by simulating two-polarisation string motion, discretising the partial differential equations governing the string’s behaviour with the finite difference method. A globally energy balanced scheme is used, as a guarantee of numerical stability under highly nonlinear conditions. In one polarisation, a nonlinear contact model is used for the normal forces exerted by the dynamic bow hair, left hand fingers, and fingerboard. In the other polarisation, a force-velocity friction curve is used for the resulting tangential forces. The scheme update requires the solution of two nonlinear vector equations. The dynamic input parameters allow for simulating a wide range of gestures; some typical bow and left hand gestures are presented, along with synthetic sound and video demonstrations.
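
    To give a concrete but heavily simplified picture of the finite-difference approach, the sketch below advances an ideal, lossless string in one polarisation by one explicit time step. The paper's actual scheme additionally includes loss, stiffness, the nonlinear bow, finger and fingerboard contact forces, and an energy-balanced implicit solve, none of which are shown here.

```python
# Minimal sketch: explicit finite-difference step for u_tt = c^2 u_xx with fixed ends.
import numpy as np

def string_step(u_prev, u_curr, courant2):
    """Advance the string by one time step; courant2 = (c * dt / dx)**2 must be <= 1."""
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + courant2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    return u_next
```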

  4. Gestures for Picture Archiving and Communication Systems (PACS) operation in the operating room: Is there any standard?

    Science.gov (United States)

    Madapana, Naveen; Gonzalez, Glebys; Rodgers, Richard; Zhang, Lingsong; Wachs, Juan P

    2018-01-01

    Gestural interfaces allow accessing and manipulating Electronic Medical Records (EMR) in hospitals while maintaining a completely sterile environment. Particularly, in the Operating Room (OR), these interfaces enable surgeons to browse a Picture Archiving and Communication System (PACS) without the need of delegating functions to the surgical staff. Existing gesture-based medical interfaces rely on a suboptimal and arbitrarily small set of gestures that are mapped to a few commands available in PACS software. The objective of this work is to discuss a method to determine the most suitable set of gestures based on surgeons' acceptability. To achieve this goal, the paper introduces two key innovations: (a) a novel methodology to incorporate gestures' semantic properties into the agreement analysis, and (b) a new agreement metric to determine the most suitable gesture set for a PACS. Three neurosurgical diagnostic tasks were conducted by nine neurosurgeons. The set of commands and gesture lexicons were determined using a Wizard of Oz paradigm. The gestures were decomposed into a set of 55 semantic properties based on the motion trajectory, orientation and pose of the surgeons' hands, and their ground truth values were manually annotated. Finally, a new agreement metric was developed, using the known Jaccard similarity to measure consensus between users over a gesture set. A set of 34 PACS commands was found to be a sufficient number of actions for PACS manipulation. In addition, it was found that there is a level of agreement of 0.29 among the surgeons over the gestures found. Two statistical tests including paired t-test and Mann Whitney Wilcoxon test were conducted between the proposed metric and the traditional agreement metric. It was found that the agreement values computed using the former metric are significantly higher (p < …), and the level of agreement for PACS operation is higher than the previously reported metric (0.29 vs 0.13). This observation is based on the fact that the agreement focuses on main
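
    The sketch below illustrates one plausible reading of a property-based agreement score: each participant's gesture for a command is represented as a set of semantic properties, agreement for a command is the mean pairwise Jaccard similarity across participants, and the overall score averages over commands. The property names and the aggregation are assumptions, not the paper's exact metric.

```python
# Minimal sketch of a Jaccard-based agreement score over gesture semantic properties.
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two property sets."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def command_agreement(gesture_sets):
    """Mean pairwise Jaccard similarity among participants for one command."""
    pairs = list(combinations(gesture_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def overall_agreement(per_command):
    """Average agreement over all commands."""
    return sum(command_agreement(g) for g in per_command) / len(per_command)

# Example with two hypothetical commands and three participants
per_command = [
    [{"swipe", "right_hand", "horizontal"}, {"swipe", "right_hand"}, {"swipe", "horizontal"}],
    [{"pinch", "two_hands"}, {"pinch", "right_hand"}, {"pinch", "two_hands"}],
]
print(round(overall_agreement(per_command), 2))
```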

  5. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    Science.gov (United States)

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  6. Intermediate view synthesis for eye-gazing

    Science.gov (United States)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. In regard to nonverbal communication, eye contact is one of the most important forms that an individual can use. However, lack of eye contact occurs when we use a video conferencing system. The disparity between the locations of the eyes and the camera gets in the way of eye contact, and the resulting lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
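
    As a minimal illustration of the interpolation at the heart of view morphing, the sketch below places matched points from the top and bottom camera images at a weighted average position and blends their colours for an intermediate, eye-level view. A real gaze-correction pipeline also rectifies the views and warps dense meshes; only the interpolation step is shown, and all names are illustrative.

```python
# Minimal sketch: interpolate matched points and blend colours between two source views.
import numpy as np

def morph_points(pts_top, pts_bottom, t=0.5):
    """Intermediate positions of matched points (N x 2); t=0 is the top view, t=1 the bottom."""
    return (1.0 - t) * pts_top + t * pts_bottom

def blend_colours(col_top, col_bottom, t=0.5):
    """Blend colours sampled from the two source views at the matched points."""
    return ((1.0 - t) * col_top + t * col_bottom).astype(np.uint8)
```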

  7. Eye and hand motor interactions with the Symbol Digit Modalities Test in early multiple sclerosis.

    Science.gov (United States)

    Nygaard, Gro O; de Rodez Benavent, Sigrid A; Harbo, Hanne F; Laeng, Bruno; Sowa, Piotr; Damangir, Soheil; Bernhard Nilsen, Kristian; Etholm, Lars; Tønnesen, Siren; Kerty, Emilia; Drolsum, Liv; Inge Landrø, Nils; Celius, Elisabeth G

    2015-11-01

    Eye and hand motor dysfunction may be present early in the disease course of relapsing-remitting multiple sclerosis (RRMS), and can affect the results on visual and written cognitive tests. We aimed to test for differences in saccadic initiation time (SI time) between RRMS patients and healthy controls, and whether SI time and hand motor speed interacted with the written version of the Symbol Digit Modalities Test (wSDMT). Patients with RRMS (N = 44, age 35.1 ± 7.3 years), time since diagnosis < 3 years and matched controls (N = 41, age 33.2 ± 6.8 years) were examined with ophthalmological, neurological and neuropsychological tests, as well as structural MRI (white matter lesion load (WMLL) and brainstem lesions), visual evoked potentials (VEP) and eye-tracker examinations of saccades. SI time was longer in RRMS than controls (p < 0.05). SI time was not related to the Paced Auditory Serial Addition Test (PASAT), WMLL or to the presence of brainstem lesions. 9 hole peg test (9HP) correlated significantly with WMLL (r = 0.58, p < 0.01). Both SI time and 9HP correlated negatively with the results of wSDMT (r = -0.32, p < 0.05, r = -0.47, p < 0.01), but none correlated with the results of PASAT. RRMS patients have an increased SI time compared to controls. Cognitive tests results, exemplified by the wSDMT, may be confounded by eye and hand motor function. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Delayed Stimulus-Specific Improvements in Discourse Following Anomia Treatment Using an Intentional Gesture

    Science.gov (United States)

    Altmann, Lori J. P.; Hazamy, Audrey A.; Carvajal, Pamela J.; Benjamin, Michelle; Rosenbek, John C.; Crosson, Bruce

    2014-01-01

    Purpose: In this study, the authors assessed how the addition of intentional left-hand gestures to an intensive treatment for anomia affects 2 types of discourse: picture description and responses to open-ended questions. Method: Fourteen people with aphasia completed treatment for anomia comprising 30 treatment sessions over 3 weeks. Seven…

  9. [A case with apraxia of tool use: selective inability to form a hand posture for a tool].

    Science.gov (United States)

    Hayakawa, Yuko; Fujii, Toshikatsu; Yamadori, Atsushi; Meguro, Kenichi; Suzuki, Kyoko

    2015-03-01

    Impaired tool use is recognized as a symptom of ideational apraxia. While many studies have focused on difficulties in producing gestures as a whole, using tools involves several steps; these include forming hand postures appropriate for the use of certain tool, selecting objects or body parts to act on, and producing gestures. In previously reported cases, both producing and recognizing hand postures were impaired. Here we report the first case showing a selective impairment of forming hand postures appropriate for tools with preserved recognition of the required hand postures. A 24-year-old, right-handed man was admitted to hospital because of sensory impairment of the right side of the body, mild aphasia, and impaired tool use due to left parietal subcortical hemorrhage. His ability to make symbolic gestures, copy finger postures, and orient his hand to pass a slit was well preserved. Semantic knowledge for tools and hand postures was also intact. He could flawlessly select the correct hand postures in recognition tasks. He only demonstrated difficulties in forming a hand posture appropriate for a tool. Once he properly grasped a tool by trial and error, he could use it without hesitation. These observations suggest that each step of tool use should be thoroughly examined in patients with ideational apraxia.

  10. Spontaneous gestures influence strategy choices in problem solving.

    Science.gov (United States)

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  11. Gestures of grieving and mourning: a transhistoric dance-scheme

    OpenAIRE

    Briand , Michel

    2013-01-01

    International audience; This short analysis refers to cultural anthropology and aesthetics of dance, and intends to present a few remarkable steps in the long history of a special kind of danced gestures: expressions of feelings and representations of activities related to grieving and mourning, like lifting up hands in the air or upon one’s head and dramatically waving long hair. The focus is set on some universals and similarities as well as on contextualized variations and differences, in ...

  12. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    Science.gov (United States)

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.

  13. TOT phenomena: Gesture production in younger and older adults.

    Science.gov (United States)

    Theocharopoulou, Foteini; Cocks, Naomi; Pring, Timothy; Dipper, Lucy T

    2015-06-01

    This study explored age-related changes in gesture to better understand the relationship between gesture and word retrieval from memory. The frequency of gestures during tip-of-the-tongue (TOT) states highlights this relationship. There is a lack of evidence describing the form and content of iconic gestures arising spontaneously in such TOT states and a parallel gap addressing age-related variations. In this study, TOT states were induced in 45 participants from 2 age groups (older and younger adults) using a pseudoword paradigm. The type and frequency of gestures produced was recorded during 2 experimental conditions (single-word retrieval and narrative task). We found that both groups experienced a high number of TOT states, during which they gestured. Iconic co-TOT gestures were more common than noniconic gestures. Although there was no age effect on the type of gestures produced, there was a significant, task-specific age difference in the amount of gesturing. That is, younger adults gestured more in the narrative task, whereas older adults generated more gestures in the single-word-retrieval task. Task-specific age differences suggest that there are age-related differences in terms of the cognitive operations involved in TOT gesture production. (c) 2015 APA, all rights reserved.

  14. Musical Shaping Gestures: Considerations about Terminology and Methodology

    Directory of Open Access Journals (Sweden)

    Elaine King

    2013-12-01

    Full Text Available Fulford and Ginsborg's investigation into non-verbal communication during music rehearsal-talk between performers with and without hearing impairments extends existing research in the field of gesture studies by contributing significantly to our understanding of musicians' physical gestures as well as opening up discussion about the relationship between speech, sign and gesture in discourse about music. Importantly, the authors weigh up the possibility of an emerging sign language about music. This commentary focuses on three key considerations in response to their paper: first, use of terminology in the study of gesture, specifically about 'musical shaping gestures' (MSGs); second, methodological issues about capturing physical gestures; and third, evaluation of the application of gesture research beyond the rehearsal context. While the difficulties of categorizing gestures in observational research are acknowledged, I indicate that the consistent application of terminology from outside and within the study is paramount. I also suggest that the classification of MSGs might be based upon a set of observed physical characteristics within a single gesture, including size, duration, speed, plane and handedness, leading towards an alternative taxonomy for interpreting these data. Finally, evaluation of the application of gesture research in education and performance arenas is provided.

  15. Introduction: Towards an Ethics of Gesture

    Directory of Open Access Journals (Sweden)

    Lucia Ruprecht

    2017-06-01

    Full Text Available The introduction to this special section of Performance Philosophy takes Giorgio Agamben’s remarks about the mediality and potentiality of gesture as a starting point to rethink gesture’s nexus with ethics. Shifting the emphasis from philosophical reflection to corporeal practice, it defines gestural ethics as an acting-otherwise which comes into being in the particularities of singular gestural practice, its forms, kinetic qualities, temporal displacements and calls for response. Gestural acting-otherwise is illustrated in a number of ways: We might talk of a gestural ethics when gesturality becomes an object for dedicated analytical exploration and reflection on sites where it is not taken for granted, but exhibited, on stage or on screen, in its mediality, in the ways it quotes, signifies and departs from signification, but also in the ways in which it follows a forward-looking agenda driven by adaptability and inventiveness. It interrupts or modifies operative continua that might be geared towards violence; it appears in situations that are suspended between the possibility of malfunction and the potential of room for play; and it emerges in the ways in which gestures act on their own implication in the signifying structures of gender, sexuality, race, and class, on how these structures play out relationally across time and space, and between historically and locally situated human beings.

  16. Gesture and Power

    OpenAIRE

    Covington-Ward, Yolanda

    2016-01-01

    In Gesture and Power Yolanda Covington-Ward examines the everyday embodied practices and performances of the BisiKongo people of the lower Congo to show how their gestures, dances, and spirituality are critical in mobilizing social and political action. Conceiving of the body as the center of analysis, a catalyst for social action, and as a conduit for the social construction of reality, Covington-Ward focuses on specific flashpoints in the last ninety years of Congo's troubled history, when ...

  17. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. None. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  19. Gesture Modelling for Linguistic Purposes

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2007-05-01

    Full Text Available The study of sign languages attempts to create a coherent model that binds the expressive nature of signs conveyed in gestures to a linguistic framework. Gesture modelling offers an alternative that provides device independence, scalability...

  20. Ethics, Gesture and the Western

    Directory of Open Access Journals (Sweden)

    Michael Minden

    2017-06-01

    Full Text Available This paper relates the Western Movie to Agamben’s implied gestural zone between intention and act. Film is important in the realisation of this zone because it was the first means of representation to capture the body in movement. The Western movie explores the space of ethical indistinction between the acts of individual fighters and the establishment of a rule of law, or putting this another way, between violence and justice. Two classic examples of an archetypal Western plot (Shane, 1953, and Unforgiven, 1991) that particularly embody this are cited. In both, a gunfighter who has forsworn violence at the start is led by the circumstances of the plot to take it up once more at the conclusion. In these terms, all the gestures contained between these beginning- and end-points are analysable as an ethics of gesture because, captured as gestures, they occupy the human space between abstraction and action, suspended between them, and reducible to neither. David Foster Wallace's definition of this narrative arc in Infinite Jest (and embodied in it) is adduced in order to suggest a parallel between Agamben's notion of an ethics of gesture and an ethics of genre.

  1. Hand-eye calibration for rigid laparoscopes using an invariant point.

    Science.gov (United States)

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
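
    The sketch below shows the geometric building block that invariant-point formulations rest on: if a tracked rigid body moves while one physical point stays fixed, each tracked pose gives a linear constraint relating the point's unknown offset in the marker frame to its unknown position in the tracker frame, and the stacked system is solved by least squares. This is an illustration of the idea only, not the paper's hand-eye calibration procedure, and all names are assumptions.

```python
# Minimal sketch: least-squares estimate of a single fixed ("invariant") point from tracked poses.
import numpy as np

def invariant_point(rotations, translations):
    """rotations: list of 3x3 arrays, translations: list of 3-vectors; returns (p_marker, p_tracker)."""
    A_rows, b_rows = [], []
    for R, t in zip(rotations, translations):
        # R @ p_marker + t = p_tracker  ->  [R | -I] [p_marker; p_tracker] = -t
        A_rows.append(np.hstack([np.asarray(R), -np.eye(3)]))
        b_rows.append(-np.asarray(t, dtype=float))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```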

  2. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    Science.gov (United States)

    Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy

    2017-01-01

    The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and to a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to the left

  3. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    Directory of Open Access Journals (Sweden)

    J Adam Noah

    Full Text Available The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and to a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to

  4. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Science.gov (United States)

    Marstaller, Lars; Burianová, Hana; Sowman, Paul F

    2014-01-01

    The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

  5. High gamma oscillations in medial temporal lobe during overt production of speech and gestures.

    Directory of Open Access Journals (Sweden)

    Lars Marstaller

    Full Text Available The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15-25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50-90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

  6. Effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy.

    Science.gov (United States)

    Shin, Ji-Won; Song, Gui-Bin; Hwangbo, Gak

    2015-07-01

    [Purpose] The purpose of the study was to evaluate the effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy. [Subjects] Sixteen children (9 males, 7 females) with spastic diplegic cerebral palsy were recruited and randomly assigned to the conventional neurological physical therapy group (CG) and virtual reality training group (VRG). [Methods] Eight children in the control group performed 45 minutes of therapeutic exercise twice a week for eight weeks. In the experimental group, the other eight children performed 30 minutes of therapeutic exercise and 15 minutes of a training program using virtual reality twice a week during the experimental period. [Results] After eight weeks of the training program, there were significant differences in eye-hand coordination and visual motor speed in the comparison of the virtual reality training group with the conventional neurological physical therapy group. [Conclusion] We conclude that a well-designed training program using virtual reality can improve eye-hand coordination in children with cerebral palsy.

  7. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation.

    Science.gov (United States)

    Yu, Ningbo; Xu, Chang; Li, Huanshuai; Wang, Kui; Wang, Liancheng; Liu, Jingtai

    2016-03-18

    Disabilities after neural injury, such as stroke, bring a tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, is easy to use at home or in centralized institutions and, thus, holds effective potential for rehabilitation training.

  8. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation

    Directory of Open Access Journals (Sweden)

    Ningbo Yu

    2016-03-01

    Full Text Available Disabilities after neural injury, such as stroke, bring a tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, is easy to use at home or in centralized institutions and, thus, holds effective potential for rehabilitation training.

  9. Young Children Create Iconic Gestures to Inform Others

    Science.gov (United States)

    Behne, Tanya; Carpenter, Malinda; Tomasello, Michael

    2014-01-01

    Much is known about young children's use of deictic gestures such as pointing. Much less is known about their use of other types of communicative gestures, especially iconic or symbolic gestures. In particular, it is unknown whether children can create iconic gestures on the spot to inform others. Study 1 provided 27-month-olds with the…

  10. Releasing the constraints on aphasia therapy: the positive impact of gesture and multimodality treatments.

    Science.gov (United States)

    Rose, Miranda L

    2013-05-01

    There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy (Pulvermüller et al., 2001). Confusion exists concerning the role of nonverbal treatments in treating people with aphasia. The central argument of this paper is that given the state of the empirical evidence and the strong theoretical accounts of modality interactions in human communication, gesture-based and multimodality aphasia treatments are at least as legitimate an option as constraint-based aphasia treatment. Theoretical accounts of modality interactions in human communication and the gesture production abilities of individuals with aphasia that are harnessed in treatments are reviewed. The negative effects on word retrieval of restricting gesture production are also reviewed, and an overview of the neurological architecture subserving language processing is provided as rationale for multimodality treatments. The evidence for constrained and unconstrained treatments is critically reviewed. Together, these data suggest that constraint treatments and multimodality treatments are equally efficacious, and there is limited support for constraining client responses to the spoken modality.

  11. Releasing the Constraints on Aphasia Therapy: The Positive Impact of Gesture and Multimodality Treatments

    Science.gov (United States)

    Rose, Miranda L.

    2013-01-01

    Purpose: There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy…

  12. Explaining dog-wolf differences in utilizing human pointing gestures: selection for synergistic shifts in the development of some social skills.

    Science.gov (United States)

    Gácsi, Márta; Györi, Borbála; Virányi, Zsófia; Kubinyi, Enikö; Range, Friederike; Belényi, Beatrix; Miklósi, Adám

    2009-08-28

    The comparison of human-related communication skills of socialized canids may help to understand the evolution and the epigenesis of gesture comprehension in humans. To reconcile previously contradictory views on the origin of dogs' outstanding performance in utilizing human gestures, we suggest that dog-wolf differences should be studied in a more complex way. We present data both on the performance and the behaviour of dogs and wolves of different ages in a two-way object choice test. Characteristic behavioural differences showed that for wolves it took longer to establish eye contact with the pointing experimenter, they struggled more with the handler, and pups also bit her more before focusing on the human's signal. The performance of similarly hand-reared 8-week-old dogs and wolves did not differ in utilizing the simpler proximal momentary pointing. However, when tested with the distal momentary pointing, 4-month-old pet dogs outperformed same-aged hand-reared wolves. Thus early and intensive socialisation does not diminish differences between young dogs and wolves in behaviour and performance. Socialised adult wolves performed as well as dogs in this task without pretraining. The success of adult wolves was accompanied by increased willingness to cooperate. Thus, we provide evidence for the first time that socialised adult wolves are as successful in relying on distal momentary pointing as adult pet dogs. However, the delayed emergence of utilising human distal momentary pointing in wolves shows that these wild canines react to a lesser degree to intensive socialisation than dogs, which are able to control agonistic behaviours and inhibit actions in a food-related task early in development. We suggest a "synergistic" hypothesis, claiming that positive feedback processes (both evolutionary and epigenetic) have increased the readiness of dogs to attend to humans, providing the basis for dog-human communication.

  13. Gesture analysis for physics education researchers

    Directory of Open Access Journals (Sweden)

    Rachel E. Scherr

    2008-01-01

    Full Text Available Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

  14. Enhancement of naming in nonfluent aphasia through gesture.

    Science.gov (United States)

    Hanlon, R E; Brown, J W; Gerstman, L J

    1990-02-01

    In a number of studies that have examined the gestural disturbance in aphasia and the utility of gestural interventions in aphasia therapy, a variable degree of facilitation of verbalization during gestural activity has been reported. The present study examined the effect of different unilateral gestural movements on simultaneous oral-verbal expression, specifically naming to confrontation. It was hypothesized that activation of the phylogenetically older proximal motor system of the hemiplegic right arm in the execution of a communicative but nonrepresentational pointing gesture would have a facilitatory effect on naming ability. Twenty-four aphasic patients, representing five aphasic subtypes, including Broca's, Transcortical Motor, Anomic, Global, and Wernicke's aphasics were assessed under three gesture/naming conditions. The findings indicated that gestures produced through activation of the proximal (shoulder) musculature of the right paralytic limb differentially facilitated naming performance in the nonfluent subgroup, but not in the Wernicke's aphasics. These findings may be explained on the view that functional activation of the archaic proximal motor system of the hemiplegic limb, in the execution of a communicative gesture, permits access to preliminary stages in the formative process of the anterior action microgeny, which ultimately emerges in vocal articulation.

  15. Neural correlates of gesture processing across human development.

    Science.gov (United States)

    Wakefield, Elizabeth M; James, Thomas W; James, Karin H

    2013-01-01

    Co-speech gesture facilitates learning to a greater degree in children than in adults, suggesting that the mechanisms underlying the processing of co-speech gesture differ as a function of development. We suggest that this may be partially due to children's lack of experience producing gesture, leading to differences in the recruitment of sensorimotor networks when comparing adults to children. Here, we investigated the neural substrates of gesture processing in a cross-sectional sample of 5-, 7.5-, and 10-year-old children and adults and focused on relative recruitment of a sensorimotor system that included the precentral gyrus (PCG) and the posterior middle temporal gyrus (pMTG). Children and adults were presented with videos in which communication occurred through different combinations of speech and gesture during a functional magnetic resonance imaging (fMRI) session. Results demonstrated that the PCG and pMTG were recruited to different extents in the two populations. We interpret these novel findings as supporting the idea that gesture perception (pMTG) is affected by a history of gesture production (PCG), revealing the importance of considering gesture processing as a sensorimotor process.

  16. Development of an evaluation function for eye-hand coordination robotic therapy.

    Science.gov (United States)

    Pernalete, N; Tang, F; Chang, S M; Cheng, F Y; Vetter, P; Stegemann, M; Grantner, J

    2011-01-01

    This paper is the continuation of a work presented at ICORR 07, in which we discussed the possibility of improving eye-hand coordination in children diagnosed with this problem, using a robotic mapping from a haptic user interface to a virtual environment. Our goal is to develop, implement and refine a system that will assess and improve the eye-hand coordination and grip strength in children diagnosed with poor graphomotor skills. A detailed analysis of patterns (e.g., labyrinths, letters and angles) was conducted in order to select three very distinguishable levels of difficulty that could be included in the system, and which would yield the greatest benefit in terms of assessment of coordination and strength issues as well as in training. Support algorithms (position, force, velocity, inertia and viscosity) were also developed and incorporated into the tasks in order to introduce general computer assistance to the mapping of the user's movements to the computer screen without overriding the user's commands to the robotic device. In order to evaluate performance (given by %accuracy and time) of the executed tasks, a sophisticated evaluation function was designed based on image analysis and edge detection algorithms. This paper presents the development of the haptic tasks, the various assistance algorithms, the description of the evaluation function and the results of a study implemented at the Motor Development Clinic at Cal Poly Pomona. The results (Accuracy and Time) of this function are currently being used as inputs to an Intelligent Decision Support System (described in), which, in turn, suggests the next task to be executed by the subject based on his/her performance. © 2011 IEEE
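
    The record says the evaluation function scores %accuracy from image analysis and edge detection but does not give its exact form, so the following is only a minimal sketch under assumed inputs: a boolean image marking the template's edges (e.g., the labyrinth or letter to trace) and a boolean image of the user's traced path, both the same size. The pixel tolerance is a hypothetical parameter.

```python
import numpy as np

def trace_accuracy(template_edges: np.ndarray, trace: np.ndarray, tolerance_px: int = 3) -> float:
    """Percentage of traced pixels that fall within `tolerance_px` of the template edge.

    Both inputs are boolean images of the same shape: `template_edges` marks the
    target pattern, `trace` marks the pixels the user actually drew.
    """
    # Grow the template edge by `tolerance_px` with a simple chessboard dilation,
    # so small deviations from the ideal path still count as on-target.
    dilated = template_edges.copy()
    for _ in range(tolerance_px):
        padded = np.pad(dilated, 1, mode="constant")
        dilated = (
            padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1]
            | padded[1:-1, :-2] | padded[1:-1, 2:]
        )
    traced = trace.sum()
    if traced == 0:
        return 0.0
    on_target = (trace & dilated).sum()
    return 100.0 * on_target / traced
```

    Completion time would be logged separately; the two values together would then feed the decision-support step mentioned in the record.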

  17. Personalized gesture interactions for cyber-physical smart-home environments

    Institute of Scientific and Technical Information of China (English)

    Yihua LOU; Wenjun WU; Radu-Daniel VATAVU; Wei-Tek TSAI

    2017-01-01

    A gesture-based interaction system for smart homes is a part of a complex cyber-physical environment, for which researchers and developers need to address major challenges in providing personalized gesture interactions. However, current research efforts have not tackled the problem of personalized gesture recognition that often involves user identification. To address this problem, we propose in this work a new event-driven service-oriented framework called gesture services for cyber-physical environments (GS-CPE) that extends the architecture of our previous work gesture profile for web services (GPWS). To provide user identification functionality, GS-CPE introduces a two-phase cascading gesture password recognition algorithm for gesture-based user identification using a two-phase cascading classifier with the hidden Markov model and the Golden Section Search, which achieves an accuracy rate of 96.2% with a small training dataset. To support personalized gesture interaction, an enhanced version of the Dynamic Time Warping algorithm with multiple gestural input sources and dynamic template adaptation support is implemented. Our experimental results demonstrate the performance of the algorithm can achieve an average accuracy rate of 98.5% in practical scenarios. Comparison results reveal that GS-CPE has faster response time and higher accuracy rate than other gesture interaction systems designed for smart-home environments.
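
    The personalization layer described above builds on Dynamic Time Warping (DTW) over gestural input. The sketch below is the minimal, textbook DTW distance between two recorded trajectories, not the enhanced multi-source, template-adapting variant or the HMM/Golden Section Search cascade reported in the record; the trajectory shapes and dimensionality are assumptions.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic Dynamic Time Warping distance between two gesture trajectories.

    Each sequence is an (n_samples, n_dims) array of sensor readings, e.g.
    (x, y, z) hand positions sampled over time.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])    # local point distance
            cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                 cost[i, j - 1],               # deletion
                                 cost[i - 1, j - 1])           # match
    return float(cost[n, m])

# A recorded gesture would be labeled with the template giving the smallest DTW
# distance; template adaptation (as in the record) would additionally update the
# stored template toward recently accepted samples, which is not shown here.
```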

  18. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    Directory of Open Access Journals (Sweden)

    Dhana Wolf

    2017-11-01

    Full Text Available Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.

  19. Comprehensibility and neural substrate of communicative gestures in severe aphasia.

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2017-08-01

    Communicative gestures can compensate for the incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also have an impact on the production of gestures. We compared the comprehensibility of gestural communication of persons with severe aphasia and non-aphasic persons and used voxel based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. On group level, persons with aphasia conveyed more information via gestures than controls, indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are selection of and flexible changes between communication channels as well as between different types of gestures and between features of actions and objects that are expressed by gestures. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Cross-cultural variation of speech-accompanying gesture : a review

    OpenAIRE

    Kita, Sotaro

    2009-01-01

    This article reviews the literature on cross-cultural variation of gestures. Four factors governing the variation were identified. The first factor is the culture-specific convention for form-meaning associations. This factor is involved in well-known cross-cultural differences in emblem gestures (e.g., the OK-sign), as well as pointing gestures. The second factor is culture-specific spatial cognition. Representational gestures (i.e., iconic and deictic gestures) that express spatial contents...

  1. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  2. Universal brain systems for recognizing word shapes and handwriting gestures during reading.

    Science.gov (United States)

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J L; Dehaene, Stanislas

    2012-12-11

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner's area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies.

  3. Communicative Gestures Facilitate Problem Solving for Both Communicators and Recipients

    Science.gov (United States)

    Lozano, Sandra C.; Tversky, Barbara

    2006-01-01

    Gestures are a common, integral part of communication. Here, we investigate the roles of gesture and speech in explanations, both for communicators and recipients. Communicators explained how to assemble a simple object, using either speech with gestures or gestures alone. Gestures used for explaining included pointing and exhibiting to indicate…

  4. Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    OpenAIRE

    Ferré , Gaëlle; Mark , Tutton

    2015-01-01

    International audience; The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and...

  5. Adult Gesture in Collaborative Mathematics Reasoning in Different Ages

    Science.gov (United States)

    Noto, M. S.; Harisman, Y.; Harun, L.; Amam, A.; Maarif, S.

    2017-09-01

    This article describes a case study of postgraduate students using a descriptive method. A problem was designed to facilitate reasoning on the topic of the Chi-Square test. The problem was given to two male students of different ages to investigate their gesture patterns and relate them to their reasoning process. The reasoning indicators cover drawing conclusions by analogy and generalization, and arranging conjectures. The study asks whether gesture is unique to every individual and seeks to identify the gesture patterns used by students of different ages. A reasoning problem was employed to collect the data: the two students were asked to collaborate in reasoning through the problem, the discussion was video-recorded to observe the gestures, and the recorded videos are described in detail in this paper. Prosodic cues such as timing, conversation text, and the gestures that appear help in understanding the gestures. The purpose of this study is to investigate whether age difference influences the maturity of collaboration as observed from a gesture perspective. The findings show that age is not a primary factor influencing gesture in this reasoning process. In this case, adult gesture, that is, gesture performed by the older student, does not show that he achieves, maintains, and focuses on the problem earlier. Adult gesture also does not strengthen or expand meaning when the older student's words or the language used in reasoning are unfamiliar to the younger student. Adult gesture also does not affect cognitive uncertainty in mathematics reasoning. Future research should take more samples to check the consistency of these findings.

  6. Gesturing by Speakers with Aphasia: How Does It Compare?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…

  7. Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation

    NARCIS (Netherlands)

    Barth, Ruud; Hemming, Jochen; Henten, van E.J.

    2016-01-01

    A modular software framework design that allows flexible implementation of eye-in-hand sensing and motion control for agricultural robotics in dense vegetation is reported. Harvesting robots in cultivars with dense vegetation require multiple viewpoints and on-line trajectory adjustments in order

  8. Pilot Study: The Role of the Hemispheric Lateralization in Mental Disorders by Use of the Limb (Eye, Hand, Foot) Dominance.

    Science.gov (United States)

    Goodarzi, Naser; Dabbaghi, Parviz; Valipour, Habib; Vafadari, Behnam

    2015-04-01

    Based on previous studies, we know that hemispheric lateralization defects increase the probability of psychological disorders. We also know that the dominant limb is controlled by the dominant hemisphere and that limb preference is used as an indicator of hemisphere dominance. In this study we attempted to explore hemispheric dominance through the use of three limbs (hand, foot and eye). We performed this survey on two samples, psychiatric patients compared with a normal population. For this purpose, knowing that limb dominance is stabilized in adolescence and that age has no effect on people above 15, we used 48 high school girls and 65 boys as the final samples of the normal population. The patient group included 57 males and 26 females who were chronic psychiatric patients. The results show that left-eye dominance is more common in patients than in the normal group (P=0.000), but the handedness and footedness differences are not significant. In psychotic, bipolar and depressive disorders, eye dominance showed a significant difference (P=0.018), but this is not true of hand and foot dominance. Our findings indicate that left-eye dominance is generally more common in psychiatric patients and is also more common in psychotic and depressive disorders, while it is less common in bipolar disorders.

  9. [The hands--means of expression for relationship, creativity and coping].

    Science.gov (United States)

    Kinzl, J F

    2008-02-01

    The hand is an important active sense organ. Hands help human beings to organise and change the environment, and most developments of mankind would not have been possible without hands. The hands also made a great contribution to the high degree of brain development. The hands are an important characteristic feature of identity and identification and sign of individuality as well. Human hands have a long history as an instrument of social interaction and as an object of social attention. The language of hands offers many kinds of expression and gestures help to underline the words.

  10. Mothers' labeling responses to infants' gestures predict vocabulary outcomes.

    Science.gov (United States)

    Olson, Janet; Masur, Elise Frank

    2015-11-01

    Twenty-nine infants aged 1;1 and their mothers were videotaped while interacting with toys for 18 minutes. Six experimental stimuli were presented to elicit infant communicative bids in two communicative intent contexts - proto-declarative and proto-imperative. Mothers' verbal responses to infants' gestural and non-gestural communicative bids were coded for object and action labels. Relations between maternal labeling responses and infants' vocabularies at 1;1 and 1;5 were examined. Mothers' labeling responses to infants' gestural communicative bids were concurrently and predictively related to infants' vocabularies, whereas responses to non-gestural communicative bids were not. Mothers' object labeling following gestures in the proto-declarative context mediated the association from infants' gesturing in the proto-declarative context to concurrent noun lexicons and was the strongest predictor of subsequent noun lexicons. Mothers' action labeling after infants' gestural bids in the proto-imperative context predicted infants' acquisition of action words at 1;5. Findings show that mothers' responsive labeling explain specific relations between infants' gestures and their vocabulary development.

  11. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time.

    Science.gov (United States)

    Küssner, Mats B; Tidhar, Dan; Prior, Helen M; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures, accounting for the intrinsic link between movement and sound, are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones, continually sounding and concurrently varied in pitch, loudness and tempo, with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

  12. Asymmetric coupling between gestures and speech during reasoning

    NARCIS (Netherlands)

    Hoekstra, Lisette

    2017-01-01

    When children learn, insights displayed in gestures typically precede insights displayed in speech. In this study, we investigated how this leading role of gestures in cognitive development is evident in (and emerges from) the dynamic coupling between gestures and speech during one task. We

  13. Spatial reference in a bonobo gesture.

    Science.gov (United States)

    Genty, Emilie; Zuberbühler, Klaus

    2014-07-21

    Great apes frequently produce gestures during social interactions to communicate in flexible, goal-directed ways [1-3], a feature with considerable relevance for the ongoing debate over the evolutionary origins of human language [1, 4]. But despite this shared feature with language, there has been a lack of evidence for semantic content in ape gestures. According to one authoritative view, ape gestures thus do not have any specific referential, iconic, or deictic content, a fundamental difference versus human gestures and spoken language [1, 5] that suggests these features have a more recent origin in human evolution, perhaps caused by a fundamental transition from ape-like individual intentionality to human-like shared intentionality [6]. Here, we revisit this human uniqueness claim with a study of a previously undescribed human-like beckoning gesture in bonobos that has potentially both deictic and iconic character. We analyzed beckoning in two groups of bonobos, kept under near natural environmental and social conditions at the Lola Ya Bonobo sanctuary near Kinshasa, Democratic Republic of Congo, in terms of its linguistic content and underlying communicative intention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Talk to the virtual hands: self-animated avatars improve communication in head-mounted display virtual environments.

    Directory of Open Access Journals (Sweden)

    Trevor J Dodds

    Full Text Available BACKGROUND: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other. PRINCIPAL FINDINGS: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e., both describing and guessing avatars were self-animated), compared with both avatars in a static neutral pose. Participants 'passed' (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. CONCLUSIONS: Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display, and we discuss possible explanations for this and ideas for future investigation.

  15. Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related to contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape...

  16. Nonsymbolic Gestural Interaction for Ambient Intelligence

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2010-01-01

    the addressee with subtle clues about personality or cultural background. Gestures are an extremly rich source of communication-specific and contextual information for interactions in ambient intelligence environments. This chapter reviews the semantic layers of gestural interaction, focusing on the layer...

  17. Specialization of the left supramarginal gyrus for hand-independent praxis representation is not related to hand dominance

    Science.gov (United States)

    Króliczak, Gregory; Piper, Brian J.; Frey, Scott H.

    2016-01-01

    Data from focal brain injury and functional neuroimaging studies implicate a distributed network of parieto-fronto-temporal areas in the human left cerebral hemisphere as playing distinct roles in the representation of meaningful actions (praxis). Because these data come primarily from right-handed individuals, the relationship between left cerebral specialization for praxis representation and hand dominance remains unclear. We used functional magnetic resonance imaging (fMRI) to evaluate the hypothesis that strongly left-handed (right hemisphere motor dominant) adults also exhibit this left cerebral specialization. Participants planned familiar actions for subsequent performance with the left or right hand in response to transitive (e.g., “pounding”) or intransitive (e.g. “waving”) action words. In linguistic control trials, cues denoted non-physical actions (e.g., “believing”). Action planning was associated with significant, exclusively left-lateralized and extensive increases of activity in the supramarginal gyrus (SMg), and more focal modulations in the left caudal middle temporal gyrus (cMTg). This activity was hand- and gesture-independent, i.e., unaffected by the hand involved in subsequent action performance, and the type of gesture (i.e., transitive or intransitive). Compared directly with right-handers, left-handers exhibited greater involvement of the right angular gyrus (ANg) and dorsal premotor cortex (dPMC), which is indicative of a less asymmetric functional architecture for praxis representation. We therefore conclude that the organization of mechanisms involved in planning familiar actions is influenced by one’s motor dominance. However, independent of hand dominance, the left SMg and cMTg are specialized for ideomotor transformations—the integration of conceptual knowledge and motor representations into meaningful actions. These findings support the view that higher-order praxis representation and lower-level motor dominance rely

  18. What is the best strategy for retaining gestures in working memory?

    Science.gov (United States)

    Gimenes, Guillaume; Pennequin, Valérie; Mercer, Tom

    2016-07-01

    This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: "no interference", gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the "no interference" condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.

  19. Gesture-controlled interfaces for self-service machines and other applications

    Science.gov (United States)

    Cohen, Charles J. (Inventor); Beach, Glenn (Inventor); Cavell, Brook (Inventor); Foulk, Gene (Inventor); Jacobus, Charles J. (Inventor); Obermark, Jay (Inventor); Paul, George (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
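
    The patent abstract describes motion gestures as linear-in-parameters oscillatory systems whose parameters are fit by linear least squares and then matched against a bank of seeded predictor bins. The sketch below only illustrates that general idea on a one-dimensional toy trajectory with a second-order autoregressive predictor; the model form, the seeded parameter values, and the gesture labels are assumptions, not the patented formulation.

```python
import numpy as np

def fit_motion_params(x: np.ndarray) -> np.ndarray:
    """Least-squares fit of a linear-in-parameters predictor x[t] ~ a*x[t-1] + b*x[t-2]."""
    A = np.column_stack([x[1:-1], x[:-2]])   # regressors: previous two samples
    y = x[2:]                                # target: current sample
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta                             # estimated [a, b]

def best_predictor_bin(x: np.ndarray, bins: dict) -> str:
    """Return the gesture label whose seeded parameters predict the observed motion best."""
    A = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    errors = {label: np.mean((A @ np.asarray(theta) - y) ** 2)
              for label, theta in bins.items()}
    return min(errors, key=errors.get)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)
    observed = np.cos(4 * t)                 # toy oscillatory hand trajectory
    # Hypothetical bank of predictor bins seeded with per-gesture parameters.
    seeded_bins = {"slow_wave": [1.995, -1.0], "fast_wave": [1.92, -1.0]}
    print(fit_motion_params(observed), best_predictor_bin(observed, seeded_bins))
```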

  20. Eye-in-Hand Manipulation for Remote Handling: Experimental Setup

    Science.gov (United States)

    Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador

    2018-03-01

    A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER)1 is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of the target objects with irregular shapes. The overall environmental setup successfully demonstrates the required robustness and precision that remote handling tasks need.

  1. Iconic Gestures as Undervalued Representations during Science Teaching

    Science.gov (United States)

    Chue, Shien; Lee, Yew-Jin; Tan, Kim Chwee Daniel

    2015-01-01

    Iconic gestures that are ubiquitous in speech are integral to human meaning-making. However, few studies have attempted to map out the role of these gestures in science teaching. This paper provides a review of existing literature in everyday communication and education to articulate potential contributions of iconic gestures for science teaching.…

  2. Aspects of the Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2006-01-01

    is finalized instantly with one upward gesture. Several synthesis methods are presented and the control mechanisms are mapped into the multiple musical gesture interface. This enables a number of performers to interact on the same interface, either by each playing the same musical instruments simultaneously...

  3. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    Science.gov (United States)

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2018-06-01

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male actors and female actors whose action videos matched the gestures in the best possible way, perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  4. Gesture analysis of students' majoring mathematics education in micro teaching process

    Science.gov (United States)

    Maldini, Agnesya; Usodo, Budi; Subanti, Sri

    2017-08-01

    In the process of learning, especially mathematics learning, the interaction between teachers and students is certainly noteworthy. In these interactions, gestures and other body movements appear spontaneously. Gesture is an important source of information because it supports oral communication, reduces the ambiguity in understanding the concept/meaning of the material, and improves posture. This research adopts an exploratory design to provide an initial illustration of the phenomenon. The goal of the research in this article is to describe the gestures of S1 and S2 students of mathematics education during the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The result is that the two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During micro teaching, the subjects used four types of gesture, namely iconic gestures, deictic gestures, regulator gestures and adapter gestures, to facilitate delivery of the intent of the material being taught and communication with the listener. Gesture variance appears because each subject uses different gesture patterns to communicate their own mathematical ideas, so the intensity of the gestures that appear also differs.

  5. Do Verbal Children with Autism Comprehend Gesture as Readily as Typically Developing Children?

    OpenAIRE

    Dimitrova, N.; Özçalışkan, Ş.; Adamson, L.B.

    2017-01-01

    Gesture comprehension remains understudied, particularly in children with autism spectrum disorder (ASD) who have difficulties in gesture production. Using a novel gesture comprehension task, Study 1 examined how 2- to 4-year-old typically-developing (TD) children comprehend types of gestures and gesture-speech combinations, and showed better comprehension of deictic gestures and reinforcing gesture-speech combinations than iconic/conventional gestures and supplementary gesture-speech combina...

  6. Foundational Issues in Touch-Screen Stroke Gesture Design - An Integrative Review

    OpenAIRE

    Zhai , Shumin; Kristensson , Per Ola; Appert , Caroline; Andersen , Tue Haste; Cao , Xiang

    2012-01-01

    International audience; The potential for using stroke gestures to enter, retrieve and select commands and text has been recently unleashed by the popularity of touchscreen devices. This monograph provides a state-of-the-art integrative review of a body of human-computer interaction research on stroke gestures. It begins with an analysis of the design dimensions of stroke gestures as an interaction medium. The analysis classifies gestures into analogue versus abstract gestures, gestures for...

  7. Archetypal Gesture and Everyday Gesture: a fundamental binomial in Delsartean theory

    Directory of Open Access Journals (Sweden)

    Elena Randi

    2012-11-01

    Full Text Available This text presents François Delsarte’s system from a historical-exploratory viewpoint, focusing on some particular aspects of the work of the French master and the interpretation of his work by some of his main disciples. The article describes the status of the body and its importance in the Delsarte system, taking the notions of archetypal gesture and everyday gesture as the bases of this system. Indeed, the text highlights both historical facts obtained from the Delsarte archive, and arguments questioning the authorship of exercises attributed to Delsarte, which, according to the text, may have been created by his students.

  8. User-Generated Free-Form Gestures for Authentication: Security and Memorability

    OpenAIRE

    Sherman, Michael; Clark, Gradeigh; Yang, Yulong; Sugrim, Shridatt; Modig, Arttu; Lindqvist, Janne; Oulasvirta, Antti; Roos, Teemu

    2014-01-01

    This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metr...

  9. GESTURE-VERBAL UTTERANCES FROM THE COGNITIVE PERSPECTIVE

    OpenAIRE

    Martynyuk, Alla

    2016-01-01

    The article develops the idea of speech and gesture as an integral system of generation of meaning viewing an individual’s cognitive system as a dynamic, evolving semantic lattice organising semantic items of propositional and imagistic modes around a core meaning: linguistic items (propositions) are linked to ideas, concepts and beliefs as well as to specific feelings, mental states, images of gestures and stereotypic patterns of behaviour. Since gesture and speech are equally engaged in gen...

  10. Preserved Imitation of Known Gestures in Children with High-Functioning Autism

    Science.gov (United States)

    Carmo, Joana C.; Rumiati, Raffaella I.; Siugzdaite, Roma; Brambilla, Paolo

    2013-01-01

    It has been suggested that children with autism are particularly deficient at imitating novel gestures or gestures without goals. In the present study, we asked high-functioning autistic children and age-matched typically developing children to imitate several types of gestures that could be either already known or novel to them. Known gestures either conveyed a communicative meaning (i.e., intransitive) or involved the use of objects (i.e., transitive). We observed a significant interaction between gesture type and group of participants, with children with autism performing known gestures better than novel gestures. However, imitation of intransitive and transitive gestures did not differ across groups. These findings are discussed in light of a dual-route model for action imitation. PMID:24062956

  11. Development of the bedridden person support system using hand gesture.

    Science.gov (United States)

    Ichimura, Kouhei; Magatani, Kazushige

    2015-08-01

    The purpose of this study is to support bedridden and physically handicapped persons who live independently. In this study, we developed an electric appliance control system that can be used on the bed. The subject can control electric appliances using hand motion. The infrared sensors of a Kinect are used for hand motion detection. Our developed system was tested with several normal subjects and the results of the experiment were evaluated. In this experiment, all subjects lay on the bed and tried to control our system. As a result, most subjects were able to control our developed system perfectly. However, for some subjects the hand motion tracking was forcibly reset; it was difficult for these subjects to make the system recognize their opened hand. From these results, we think that if this problem is resolved, our support system will be useful for bedridden and physically handicapped persons.
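
    The record gives no implementation details beyond the use of a Kinect's infrared sensors for hand tracking, so the following is purely a schematic illustration of how a detected hand state might be mapped to appliance commands; the gesture regions, labels, and command table are hypothetical and not taken from the study.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class HandState:
    x: float          # normalized horizontal hand position from the depth sensor
    y: float          # normalized vertical hand position
    is_open: bool     # whether the tracked hand is recognized as opened

def classify_gesture(state: HandState) -> str:
    """Toy rule-based classifier: region of a bed-side control area plus hand shape."""
    if not state.is_open:
        return "idle"
    if state.y > 0.66:
        return "light_toggle"
    if state.x < 0.33:
        return "tv_channel_down"
    if state.x > 0.66:
        return "tv_channel_up"
    return "call_nurse"

# Hypothetical command table; a real system would drive IR blasters, a home
# automation API, or a nurse-call interface rather than print statements.
COMMANDS: Dict[str, Callable[[], None]] = {
    "light_toggle": lambda: print("toggle light"),
    "tv_channel_up": lambda: print("TV channel +1"),
    "tv_channel_down": lambda: print("TV channel -1"),
    "call_nurse": lambda: print("nurse call"),
    "idle": lambda: None,
}

if __name__ == "__main__":
    COMMANDS[classify_gesture(HandState(x=0.8, y=0.4, is_open=True))]()
```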

  12. Gesture Activated Mobile Edutainment (GAME)

    DEFF Research Database (Denmark)

    Rehm, Matthias; Leichtenstern, Karin; Plomer, Joerg

    2010-01-01

    An approach to intercultural training of nonverbal behavior is presented that draws from research on role-plays with virtual agents and ideas from situated learning. To this end, a mobile serious game is realized where the user acquires knowledge about German emblematic gestures and tries them out... in role-plays with virtual agents. Gesture performance is evaluated making use of built-in acceleration sensors of smart phones. After an account of the theoretical background covering diverse areas like virtual agents, situated learning and intercultural training, the paper presents the GAME approach... along with details on the gesture recognition and content authoring. By its experience-based role plays with virtual characters, GAME brings together ideas from situated learning and intercultural training in an integrated approach and paves the way for new m-learning concepts....

  13. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.

  14. When gestures show us the way: Co-speech gestures selectively facilitate navigation and spatial memory.

    OpenAIRE

    Galati, Alexia; Weisberg, Steven M.; Newcombe, Nora S.; Avraamides, Marios N.

    2017-01-01

    How does gesturing during route learning relate to subsequent spatial performance? We examined the relationship between gestures produced spontaneously while studying route directions and spatial representations of the navigated environment. Participants studied route directions, then navigated those routes from memory in a virtual environment, and finally had their memory of the environment assessed. We found that, for navigators with low spatial perspective-taking pe...

  15. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each participant was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of an utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communicational systems, and it seems inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. It underlines the importance of the role of gestures in maintaining interpersonal communication.

  16. Verbal working memory predicts co-speech gesture: evidence from individual differences.

    Science.gov (United States)

    Gillespie, Maureen; James, Ariel N; Federmeier, Kara D; Watson, Duane G

    2014-08-01

    Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Machine Learning of Musical Gestures

    OpenAIRE

    Caramiaux, Baptiste; Tanaka, Atau

    2013-01-01

    We present an overview of machine learning (ML) techniques and their application in interactive music and new digital instrument design. We first give the non-specialist reader an introduction to two ML tasks, classification and regression, that are particularly relevant for gestural interaction. We then present a review of the literature in current NIME research that uses ML in musical gesture analysis and gestural sound control. We describe the ways in which machine learning is useful for cre...

  18. Gesture Analysis for Physics Education Researchers

    Science.gov (United States)

    Scherr, Rachel E.

    2008-01-01

    Systematic observations of student gestures can not only fill in gaps in students' verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to…

  19. A novel integrative method for analyzing eye and hand behaviour during reaching and grasping in an MRI environment.

    Science.gov (United States)

    Lawrence, Jane M; Abhari, Kamyar; Prime, Steven L; Meek, Benjamin P; Desanghere, Loni; Baugh, Lee A; Marotta, Jonathan J

    2011-06-01

    The development of noninvasive neuroimaging techniques, such as fMRI, has rapidly advanced our understanding of the neural systems underlying the integration of visual and motor information. However, the fMRI experimental design is restricted by several environmental elements, such as the presence of the magnetic field and the restricted view of the participant, making it difficult to monitor and measure behaviour. The present article describes a novel, specialized software package developed in our laboratory called Biometric Integration Recording and Analysis (BIRA). BIRA integrates video with kinematic data derived from the hand and eye, acquired using MRI-compatible equipment. The present article demonstrates the acquisition and analysis of eye and hand data using BIRA in a mock (0 Tesla) scanner. A method for collecting and integrating gaze and kinematic data in fMRI studies on visuomotor behaviour has several advantages: Specifically, it will allow for more sophisticated, behaviourally driven analyses and eliminate potential confounds of gaze or kinematic data.

  20. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    , and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable...... usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction, therefore generating very low false positive rates...
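
    As a rough illustration of the saccadic approach, the sketch below matches a stream of discretized saccade directions against gesture templates with Needleman-Wunsch alignment; the direction alphabet, scoring values, templates and threshold are illustrative assumptions rather than the parameters used in the study.

```python
# Minimal sketch: classify a sequence of discretized saccade directions (U/D/L/R)
# by its best Needleman-Wunsch alignment score against gesture templates.
# Templates, scores, and the acceptance threshold are made up for illustration.

def needleman_wunsch(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Return the global alignment score of two symbol sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Hypothetical gesture templates over the saccade-direction alphabet.
TEMPLATES = {"check": "DRU", "zigzag": "RLRL", "square": "RDLU"}

def classify(saccades, threshold=2):
    """Pick the best-scoring template, or None if no template scores above threshold."""
    name, best = max(((n, needleman_wunsch(saccades, t)) for n, t in TEMPLATES.items()),
                     key=lambda pair: pair[1])
    return name if best >= threshold else None

print(classify("RDLLU"))  # a noisy 'square' still aligns well despite the extra symbol
```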

  1. Great ape gestures: intentional communication with a rich set of innate signals.

    Science.gov (United States)

    Byrne, R W; Cartmill, E; Genty, E; Graham, K E; Hobaiter, C; Tanner, J

    2017-09-08

    Great apes give gestures deliberately and voluntarily, in order to influence particular target audiences, whose direction of attention they take into account when choosing which type of gesture to use. These facts make the study of ape gesture directly relevant to understanding the evolutionary precursors of human language; here we present an assessment of ape gesture from that perspective, focusing on the work of the "St Andrews Group" of researchers. Intended meanings of ape gestures are relatively few and simple. As with human words, ape gestures often have several distinct meanings, which are effectively disambiguated by behavioural context. Compared to the signalling of most other animals, great ape gestural repertoires are large. Because of this, and the relatively small number of intended meanings they achieve, ape gestures are redundant, with extensive overlaps in meaning. The great majority of gestures are innate, in the sense that the species' biological inheritance includes the potential to develop each gestural form and use it for a specific range of purposes. Moreover, the phylogenetic origin of many gestures is relatively old, since gestures are extensively shared between different genera in the great ape family. Acquisition of an adult repertoire is a process of first exploring the innate species potential for many gestures and then gradual restriction to a final (active) repertoire that is much smaller. No evidence of syntactic structure has yet been detected.

  2. Device Control Using Gestures Sensed from EMG

    Science.gov (United States)

    Wheeler, Kevin R.

    2003-01-01

    In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
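
    A minimal sketch of the "one hidden Markov model per gesture" scheme described above is given below, using hmmlearn as a stand-in toolkit (the paper does not name a library); the RMS features, window size and state count are assumptions.

```python
# Sketch: train one Gaussian HMM per gesture class on windowed EMG features and
# classify a new recording by the model with the highest log-likelihood.
# Feature choice, window size and state count are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

def rms_windows(emg, win=64):
    """Root-mean-square per channel over non-overlapping windows -> (T, channels)."""
    n = (len(emg) // win) * win
    frames = emg[:n].reshape(-1, win, emg.shape[1])
    return np.sqrt((frames ** 2).mean(axis=1))

def train_models(examples_by_gesture, n_states=3):
    """examples_by_gesture: {gesture name: [raw EMG arrays of shape (samples, channels)]}."""
    models = {}
    for name, examples in examples_by_gesture.items():
        feats = [rms_windows(x) for x in examples]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
        models[name] = model
    return models

def recognize(models, emg):
    """Return the gesture whose HMM assigns the highest likelihood to the recording."""
    feats = rms_windows(emg)
    return max(models, key=lambda name: models[name].score(feats))
```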

  3. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition —3D-MoS (Three Dimensional Move on Sensing)— based on visual servoing technology was designed, and the on-board hand-eye-dual-cameras robot system has been developed to reduce risks of radiation exposure during decontamination processes by a filter press machine that solidifies and reduces the volume of irradiation-contaminated soil. The features of 3D-MoS include: (1) the two hand-eye cameras take images of the target object near the intersection of both lenses' centerlines; (2) observation at this intersection means that both cameras see the target object almost at the center of both images; (3) this reduces the effect of lens aberration and improves the accuracy of three-dimensional position detection. In this study, an accuracy validation test of interdigitation of the robot's hand into the filter cloth rod of the filter press —a task that is crucial for the robot to remove the contaminated cloth from the filter press machine automatically and for preventing workers from being exposed to radiation— was performed. The following results were derived: (1) the 3D-MoS-controlled robot could recognize the rod at an arbitrary position within the designated space, and all insertion tests were carried out successfully; (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 mm, with a standard deviation of 0.6 mm or less. (author)

  4. View invariant gesture recognition using the CSEMSwissRanger SR-2 camera

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    by a hysteresis bandpass filter. Gestures are represented by concatenating harmonic shape contexts over time. This representation allows for a view invariant matching of the gestures. The system is trained on gestures from one viewpoint and evaluated on gestures from other viewpoints. The results show...

  5. Gesture and naming therapy for people with severe aphasia: a group study.

    Science.gov (United States)

    Marshall, Jane; Best, Wendy; Cocks, Naomi; Cruice, Madeline; Pring, Tim; Bulcock, Gemma; Creek, Gemma; Eales, Nancy; Mummery, Alice Lockhart; Matthews, Niina; Caute, Anna

    2012-06-01

    In this study, the authors (a) investigated whether a group of people with severe aphasia could learn a vocabulary of pantomime gestures through therapy and (b) compared their learning of gestures with their learning of words. The authors also examined whether gesture therapy cued word production and whether naming therapy cued gestures. Fourteen people with severe aphasia received 15 hr of gesture and naming treatments. Evaluations comprised repeated measures of gesture and word production, comparing treated and untreated items. Baseline measures were stable but improved significantly following therapy. Across the group, improvements in naming were greater than improvements in gesture. This trend was evident in most individuals' results, although 3 participants made better progress in gesture. Gains were item specific, and there was no evidence of cross-modality cueing. Items that received gesture therapy did not improve in naming, and items that received naming therapy did not improve in gesture. Results show that people with severe aphasia can respond to gesture and naming therapies. Given the unequal gains, naming may be a more productive therapy target than gesture for many (although not all) individuals with severe aphasia. The communicative benefits of therapy were not examined but are addressed in a follow-up article.

  6. How iconic gestures enhance communication: an ERP study.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2007-06-01

    EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with information in the speech, but not the gesture; and two sorts of unrelated probes were created by pairing each related probe with a different discourse prime. Event-related potentials (ERPs) elicited by picture probes were measured within the time windows of the N300 (250-350 ms post-stimulus) and N400 (350-550 ms post-stimulus). Cross-modal related probes elicited smaller N300 and N400 than speech-only related ones, indicating that pictures were easier to interpret when they corresponded with gestures. N300 and N400 effects were not due to differences in the visual complexity of each probe type, since the same cross-modal and speech-only picture probes elicited N300 and N400 with similar amplitudes when they appeared as unrelated items. These findings extend previous research on gesture comprehension by revealing how iconic co-speech gestures modulate conceptualization, enabling listeners to better represent visuo-spatial aspects of the speaker's meaning.

  7. Gestures, but Not Meaningless Movements, Lighten Working Memory Load when Explaining Math

    Science.gov (United States)

    Cook, Susan Wagner; Yip, Terina Kuangyi; Goldin-Meadow, Susan

    2012-01-01

    Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture's beneficial effects on working memory stem from its…

  8. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    Full Text Available This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data (WILSON, 2002) retrieval and self-organization (MORONI, 2003), to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as “organized sound”, the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.
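
    The sketch below illustrates, under heavy simplification, the kind of evolutionary loop the abstract describes: a population of "sound objects" (reduced here to bare parameter vectors) evolves toward a target derived from gestural data. The parameterization, fitness function and rates are illustrative assumptions, not ESSynth's actual design.

```python
# Assumed toy version of an evolutionary sound-object population: individuals are
# parameter vectors, fitness is closeness to a gesture-derived target vector.
import random

def evolve(gesture_target, pop_size=16, generations=50, mutation=0.1):
    population = [[random.random() for _ in gesture_target] for _ in range(pop_size)]
    fitness = lambda ind: -sum((a - b) ** 2 for a, b in zip(ind, gesture_target))
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                 # keep the fitter half
        children = [[p + random.gauss(0, mutation) for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # mutated offspring
        population = parents + children
    return max(population, key=fitness)

# e.g. a gesture mapped to three normalized synthesis parameters (pitch, amplitude, grain size)
print(evolve([0.8, 0.3, 0.5]))
```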

  9. The significance of clumsy gestures in apraxia following a left hemisphere stroke.

    Science.gov (United States)

    Kangas, Maria; Tate, Robyn L

    2006-02-01

    Individuals who sustain a cerebrovascular accident (CVA) in the dominant (typically left) hemisphere, are at increased risk of developing motor skill deficits due to motor-sensory impairments, as well as cognitive impairments (e.g., apraxia). Clumsiness is a central component affecting motor skills in individuals with a left hemisphere CVA (LCVA). The term "clumsiness" however, has not been adequately operationalised in the apraxia literature in clinical terms, thereby making diagnosis difficult and its contribution to apraxic disorders uncertain. Accordingly, in this study "clumsiness" was explicitly defined by establishing a set of four criteria. The non-dominant (left) hand movements of three groups of participants were examined: 10 individuals with limb-apraxia (APX); 8 individuals without limb apraxia who had sustained a LCVA (NAPX); and 19 healthy individuals without a history of brain impairment (NBD). Performance was examined on four sets of motor tasks, including a conventional praxis test, basic perceptual-motor co-ordination and fine movement tasks, and a naturalistic actions test. A striking finding that emerged was that clumsy errors occurred frequently in all groups, including the NBD group, particularly on the praxis and fine motor tasks. In terms of quantity of clumsy errors emitted, the APX group made significantly more clumsy gestures across all four tasks in comparison to the NBD group. No differences emerged between the two clinical groups, however, in terms of total clumsy gestures emitted on the naturalistic action tasks, or the type of clumsy errors emitted on the fine motor tasks. Thus, frequency and types of clumsy gestures were partly determined by task demands. These results highlight the need to consider the contribution of clumsy gestures in limb functioning following hemispheric brain damage. In broad terms, these findings emphasise the importance of adopting more detailed analyses of movement errors in apraxia and assessments of

  10. Interactive Explanations: The Functional Role of Gestural and Bodily Action for Explaining and Learning Scientific Concepts in Face-to-Face Arrangements

    Science.gov (United States)

    Scopelitis, Stephanie A.

    As human beings, we live in, live with, and live through our bodies. And because of this it is no wonder that our hands and bodies are in motion as we interact with others in our world. Hands and body move as we give directions to another, anticipate which way to turn the screwdriver, and direct our friend to come sit next to us. Gestures, indeed, fill our everyday lives. The purpose of this study is to investigate the functional role of the body in the parts of our lives where we teach and learn with another. This project is an investigation into what I call "interactive explanations". I explore how the hands and body work toward the joint achievement of explanation and learning in face-to-face arrangements. The study aims to uncover how the body participates in teaching and learning in and across events as it slides between the multiple, interdependent roles of (1) a communicative entity, (2) a tool for thinking, and (3) a resource to shape interaction. Understanding gestures' functional roles as flexible and diverse better explains how the body participates in teaching and learning interactions. The study further aims to show that these roles and functions are dynamic and changeable based on the interests, goals and contingencies of participants' changing roles and aims in interactions, and within and across events. I employed the methodology of comparative microanalysis of pairs of videotaped conversations in which, first, experts in STEM fields (Science, Technology, Engineering and Mathematics) explained concepts to non-experts, and second, these non-experts re-explained the concept to other non-experts. The principal finding is that people strategically, creatively and collaboratively employ the hands and body as vital and flexible resources for the joint achievement of explanation and understanding. Findings further show that gestures used to explain complex STEM concepts travel across time with the non-expert into re-explanations of the concept. My

  11. Complementary hand responses occur in both peri- and extrapersonal space

    NARCIS (Netherlands)

    Faber, T.W.; van Elk, M.; Jonas, K.J.

    2016-01-01

    Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one’s right finger upon observing a left finger movement; from a

  12. Domestic dogs' (Canis familiaris) choices in reference to information provided by human and artificial hands.

    Science.gov (United States)

    Kundey, Shannon M A; Delise, Justin; De Los Reyes, Andres; Ford, Kathy; Starnes, Blair; Dennen, Weston

    2014-03-01

    Even young humans show sensitivity to the accuracy and reliability of informants' reports. Children are selective in soliciting information and in accepting claims. Recent research has also investigated domestic dogs' (Canis familiaris) sensitivity to agreement among human informants. Such research utilizing a common human pointing gesture to which dogs are sensitive in a food retrieval paradigm suggests that dogs might choose among informants according to the number of points exhibited, rather than the number of individuals indicating a particular location. Here, we further investigated dogs' use of information from human informants using a stationary pointing gesture, as well as the conditions under which dogs would utilize a stationary point. First, we explored whether the number of points or the number of individuals more strongly influenced dogs' choices. To this end, dogs encountered a choice situation in which the number of points exhibited toward a particular location and the number of individuals exhibiting those points conflicted. Results indicated that dogs chose in accordance with the number of points exhibited toward a particular location. In a second experiment, we explored the possibility that previously learned associations drove dogs' responses to the stationary pointing gesture. In this experiment, dogs encountered a choice situation in which artificial hands exhibited a stationary pointing gesture toward or away from choice locations in the absence of humans. Dogs chose the location to which the artificial hand pointed. These results are consistent with the notion that dogs may respond to a human pointing gesture due to their past-learning history.

  13. Contribution of Leg Muscle Explosive Power and Eye-Hand Coordination to The Accuracy Smash of Athletes in Volleyball Club of Universitas Islam Riau

    Directory of Open Access Journals (Sweden)

    Mimi Yulianti

    2017-11-01

    Full Text Available The purpose of this study was to determine the contribution of leg muscle explosive power and eye-hand coordination. The type of research was correlational. The population in this study was all athletes who actively follow the training, as many as 20 people, and a total sampling technique was used. Thus the sample in this study amounted to 20 male athletes. The data were collected using measurement tests on the three variables: leg muscle explosive power was measured with a vertical jump test, eye-hand coordination with a ballwerfen und fangen test, and smash accuracy with a smash accuracy test. The data were analyzed by product moment correlation and multiple correlation, followed by the contribution-of-determinant formula. Based on the data analysis, leg muscle explosive power contributed 35.52%, eye-hand coordination 20.79%, and both together 40.70% to the smash accuracy of the volleyball athletes of Universitas Islam Riau. It was concluded that leg muscle explosive power and eye-hand coordination contribute to the smash accuracy of the volleyball athletes of Universitas Islam Riau.
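
    The "contribution" figures reported above follow the usual coefficient-of-determination logic (r² × 100% for a single predictor, R² × 100% for the pair). The sketch below works through that arithmetic on made-up numbers; the arrays are illustrative, not the study's data.

```python
# Worked sketch of the contribution (coefficient of determination) computation:
# r^2 * 100 per predictor, and R^2 * 100 from a two-predictor multiple correlation.
# All values below are invented for illustration.
import numpy as np

power = np.array([48, 52, 55, 60, 63, 67, 70, 74])   # vertical jump scores, illustrative
coord = np.array([18, 17, 21, 22, 24, 23, 27, 29])   # eye-hand coordination scores, illustrative
smash = np.array([10, 12, 11, 14, 15, 16, 18, 19])   # smash accuracy scores, illustrative

r_power = np.corrcoef(power, smash)[0, 1]
r_coord = np.corrcoef(coord, smash)[0, 1]
print(f"power contribution: {r_power**2 * 100:.2f}%")
print(f"coord contribution: {r_coord**2 * 100:.2f}%")

# Multiple correlation of both predictors with the criterion (standard formula).
r12 = np.corrcoef(power, coord)[0, 1]
R2 = (r_power**2 + r_coord**2 - 2 * r_power * r_coord * r12) / (1 - r12**2)
print(f"joint contribution: {R2 * 100:.2f}%")
```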

  14. Pitch Gestures in Generative Modeling of Music

    DEFF Research Database (Denmark)

    Jensen, Kristoffer

    2011-01-01

    Generative models of music are in need of performance and gesture additions, i.e. inclusions of subtle temporal and dynamic alterations, and gestures so as to render the music musical. While much of the research regarding music generation is based on music theory, the work presented here is based...

  15. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  16. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Directory of Open Access Journals (Sweden)

    Paul Adam Bremner

    2016-02-01

    Full Text Available Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realised remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  17. Natural gesture interfaces

    Science.gov (United States)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of the system of interaction with virtual objects based on gestures. The paper describes the common problems of interaction with virtual objects, specific requirements for the interfaces for virtual and augmented reality.

  18. “TOT” phenomena: Gesture production in younger and older adults

    OpenAIRE

    Theochaaropoulou, F.; Cocks, N.; Pring, T.; Dipper, L.

    2015-01-01

    This study explored age-related changes in gesture in order to better understand the relationship between gesture and word retrieval from memory. The frequency of gestures during “Tip-of-the-Tongue” (TOT) states highlights this relationship. There is a lack of evidence describing the form and content of iconic gestures arising spontaneously in such TOT states, and a parallel gap addressing age-related variations. In this study, TOT states were induced in 45 participants from two age groups (o...

  19. Recognizing Stress Using Semantics and Modulation of Speech and Gestures

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2016-01-01

    This paper investigates how speech and gestures convey stress, and how they can be used for automatic stress recognition. As a first step, we look into how humans use speech and gestures to convey stress. In particular, for both speech and gestures, we distinguish between stress conveyed by the

  20. Co-Thought and Co-Speech Gestures Are Generated by the Same Action Generation Process

    Science.gov (United States)

    Chu, Mingyuan; Kita, Sotaro

    2016-01-01

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments…

  1. Seeing iconic gestures while encoding events facilitates children's memory of these events

    OpenAIRE

    Aussems, Suzanne; Kita, Sotaro

    2017-01-01

    An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control co...

  2. Enhancing Communication through Gesture and Naming Therapy

    Science.gov (United States)

    Caute, Anna; Pring, Tim; Cocks, Naomi; Cruice, Madeline; Best, Wendy; Marshall, Jane

    2013-01-01

    Purpose: In this study, the authors investigated whether gesture, naming, and strategic treatment improved the communication skills of 14 people with severe aphasia. Method: All participants received 15 hr of gesture and naming treatment (reported in a companion article [Marshall et al., 2012]). Half the group received a further 15 hr of strategic…

  3. Gestural acquisition in great apes: the Social Negotiation Hypothesis.

    Science.gov (United States)

    Pika, Simone; Fröhlich, Marlen

    2018-01-24

    Scientific interest in the acquisition of gestural signalling dates back to the heroic figure of Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee (Pan troglodytes verus and Pan troglodytes schweinfurthii) subspecies and communities living in their natural environments. We conclude that the three most predominant perspectives on gestural acquisition-Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization-do not satisfactorily explain our current findings on gestural interactions in chimpanzees in the wild. In contrast, we argue that the role of interactional experience and social exposure on gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata for instigating more research into this intriguing research domain.

  4. Humanoid Upper Torso Complexity for Displaying Gestures

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    2008-11-01

    Full Text Available Body language is an important part of human-to-human communication; therefore body language in humanoid robots is very important for successful communication and social interaction with humans. The number of degrees of freedom (d.o.f.) necessary to achieve realistic body language in robots has been investigated. Using animation, three robots were simulated performing body language gestures; the complex model was given 25 d.o.f., the simplified model 18 d.o.f. and the basic model 10 d.o.f. A subjective survey was created online using these animations, to obtain people's opinions on the realism of the gestures and to see if they could recognise the emotions portrayed. It was concluded that the basic system was the least realistic, the complex system the most realistic, and the simplified system only slightly less realistic than the human. Modular robotic joints were then fabricated so that the gestures could be implemented experimentally. The experimental results demonstrate that, through simplification of the required degrees of freedom, the gestures can be experimentally reproduced.

  5. Towards a Description of East African Gestures

    Science.gov (United States)

    Creider, Chet A.

    1977-01-01

    This paper describes the gestural behavior of four tribal groups, Kipsigis, Luo, Gusii, and Samburu, observed and elicited in the course of two and one-half years of field work in Western Kenya in 1970-72. The gestures are grouped into four categories: (1) initiators and finalizers of interaction; (2) imperatives; (3) responses; (4) qualifiers.…

  6. Hand Hygiene With Alcohol-Based Hand Rub: How Long Is Long Enough?

    Science.gov (United States)

    Pires, Daniela; Soule, Hervé; Bellissimo-Rodrigues, Fernando; Gayet-Ageron, Angèle; Pittet, Didier

    2017-05-01

    BACKGROUND Hand hygiene is the core element of infection prevention and control. The optimal hand-hygiene gesture, however, remains poorly defined. OBJECTIVE We aimed to evaluate the influence of hand-rubbing duration on the reduction of bacterial counts on the hands of healthcare personnel (HCP). METHODS We performed an experimental study based on the European Norm 1500. Hand rubbing was performed for 10, 15, 20, 30, 45, or 60 seconds, according to the WHO technique using 3 mL alcohol-based hand rub. Hand contamination with E. coli ATCC 10536 was followed by hand rubbing and sampling. A generalized linear mixed model with a random effect on the subject adjusted for hand size and gender was used to analyze the reduction in bacterial counts after each hand-rubbing action. In addition, hand-rubbing durations of 15 and 30 seconds were compared to assert non-inferiority (0.6 log10). RESULTS In total, 32 HCP performed 123 trials. All durations of hand rubbing led to significant reductions in bacterial counts. Reductions obtained after 15 seconds of hand rubbing were not significantly different from those obtained after 30 seconds. The mean bacterial reduction after 15 seconds of hand rubbing was 0.11 log10 lower (95% CI, -0.46 to 0.24) than after 30 seconds, demonstrating non-inferiority. CONCLUSIONS Hand rubbing for 15 seconds was not inferior to 30 seconds in reducing bacterial counts on hands under the described experimental conditions. There was no gain in reducing bacterial counts from hand rubbing longer than 30 seconds. Further studies are needed to assess the clinical significance of our findings. Infect Control Hosp Epidemiol 2017;38:547-552.

  7. Pointing and tracing gestures may enhance anatomy and physiology learning.

    Science.gov (United States)

    Macken, Lucy; Ginns, Paul

    2014-07-01

    Currently, instructional effects generated by Cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through its cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.

  8. ENVIRONMENT INDEPENDENT DIRECTIONAL GESTURE RECOGNITION TECHNIQUE FOR ROBOTS USING MULTIPLE DATA FUSION

    Directory of Open Access Journals (Sweden)

    Kishore Abishek

    2013-10-01

    Full Text Available A technique is presented here for directional gesture recognition by robots. The usual technique employed now uses camera vision and image processing. One major disadvantage of that approach is the environmental constraint: machine vision systems have many lighting constraints, so the technique can only be used in a controlled environment where the lighting is compatible with the camera system used. The technique presented here is designed to work in any environment. It does not employ machine vision. It utilizes a set of sensors fixed on the hands of a human to identify the direction in which the hand is pointing. This technique uses a cylindrical coordinate system to precisely find the direction. A programmed computing block in the robot identifies the direction accurately within the given range.
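
    The abstract only states that a cylindrical coordinate system is used; as one plausible reading, the sketch below converts two sensor readings given in cylindrical coordinates (say, wrist and fingertip) into a Cartesian pointing direction. The sensor layout is an assumption.

```python
# Sketch: turn two points measured in cylindrical coordinates into a unit
# pointing vector. The wrist/fingertip sensor placement is an assumption.
import math

def cyl_to_cart(r, theta, z):
    """Cylindrical (r, theta, z) -> Cartesian (x, y, z); theta in radians."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def pointing_direction(wrist_cyl, finger_cyl):
    """Unit vector from wrist to fingertip, both given as (r, theta, z)."""
    wx, wy, wz = cyl_to_cart(*wrist_cyl)
    fx, fy, fz = cyl_to_cart(*finger_cyl)
    dx, dy, dz = fx - wx, fy - wy, fz - wz
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

# Example: fingertip slightly further out and ahead of the wrist (illustrative values).
print(pointing_direction((0.20, 0.0, 1.0), (0.45, 0.15, 1.05)))
```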

  9. A neuropsychological approach to the study of gesture and pantomime in aphasia

    Directory of Open Access Journals (Sweden)

    Jocelyn Kadish

    1978-11-01

    Full Text Available The impairment of gesture and pantomime in aphasia was examined from a neuropsychological perspective. The Boston Diagnostic Test of Aphasia, Luria's Neuro-psychological Investigation, Pickett's Tests for gesture and pantomime and the Performance Scale of the Wechsler Adult Intelligence Scale were administered to six aphasic subjects with varying etiology and severity. Results indicated that severity of aphasia was positively related to severity of gestural disturbance; gestural ability was associated with verbal and non-linguistic aspects of ability, within receptive and expressive levels respectively; performance on gestural tasks was superior to that on verbal tasks irrespective of severity of aphasia; damage to Luria's second and third functional brain units was positively related to deficits in receptive and expressive gesture respectively; no relationship was found between severity of general intellectual impairment and gestural deficit. It was concluded that the gestural impairment may best be understood as a breakdown in complex sequential manual motor activity. Theoretical and therapeutic implications were discussed.

  10. Iconic gestures prime related concepts: an ERP study.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2007-02-01

    To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.

  11. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training.

    Science.gov (United States)

    Kutafina, Ekaterina; Laukamp, David; Bettermann, Ralf; Schroeder, Ulrik; Jonas, Stephan M

    2016-08-03

    In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user's hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills.
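
    A minimal sketch of the artificial-neural-network branch of such a recognizer is given below: fixed-length feature windows from the armband are classified with a small multilayer perceptron. Scikit-learn is used here as a stand-in toolkit, and the feature set and network size are assumptions; only the overall features-to-gesture-label pipeline follows the abstract.

```python
# Sketch: windowed EMG features -> small MLP -> gesture label.
# Feature extraction, window size, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def window_features(emg, win=50):
    """Mean absolute value and waveform length per channel for each window."""
    n = (len(emg) // win) * win
    frames = emg[:n].reshape(-1, win, emg.shape[1])
    mav = np.abs(frames).mean(axis=1)
    wl = np.abs(np.diff(frames, axis=1)).sum(axis=1)
    return np.hstack([mav, wl])

def train(recordings, labels):
    """recordings: list of (samples, channels) EMG arrays; labels: one gesture per array."""
    X = np.vstack([window_features(x).mean(axis=0, keepdims=True) for x in recordings])
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
    clf.fit(X, labels)
    return clf
```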

  12. Modelling gesture use and early language development in autism spectrum disorder.

    Science.gov (United States)

    Manwaring, Stacy S; Mead, Danielle L; Swineford, Lauren; Thurm, Audrey

    2017-09-01

    Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. To examine the relationships between gesture, fine motor and language in young children with ASD compared with a comparison group using multiple measures and methods in a structural equation modelling framework. Participants included 110 children with ASD and a non-ASD comparison group of 87 children (that included children with developmental delays (DD) or typical development (TD)), from 12 to 48 months of age. A construct of gesture use as measured by the Communication and Symbolic Behavior Scales-Developmental Profile Caregiver Questionnaire (CQ) and the Autism Diagnostic Observation Schedule (ADOS), as well as fine motor from the Mullen Scales of Early Learning and Vineland Adaptive Behavior Scales-II (VABS-II) was examined using second-order confirmatory factor analysis (CFA). A series of structural equation models then examined concurrent relationships between the aforementioned latent gesture construct and expressive and receptive language. A series of hierarchical regression analyses was run in a subsample of 36 children with ASD with longitudinal data to determine how gesture factor scores predicted later language outcomes. Across study groups, the gesture CFA model with indicators of gesture use from both the CQ (parent-reported) and ADOS (direct observation), and measures of fine motor provided good fit with all indicators significantly and strongly loading onto one gesture factor. This model of gesture use, controlling for age, was found to correlate strongly with concurrent expressive and receptive language. The correlations between gestures and concurrent language were similar in magnitude in both the ASD and non-ASD groups. In the longitudinal subsample of children with ASD, gestures at time 1 predicted later receptive (but not

  13. An investigation of co-speech gesture production during action description in Parkinson's disease.

    Science.gov (United States)

    Cleary, Rebecca A; Poliakoff, Ellen; Galpin, Adam; Dick, Jeremy P R; Holler, Judith

    2011-12-01

    Parkinson's disease (PD) can impact enormously on speech communication. One aspect of non-verbal behaviour closely tied to speech is co-speech gesture production. In healthy people, co-speech gestures can add significant meaning and emphasis to speech. There is, however, little research into how this important channel of communication is affected in PD. The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area. Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced. This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Towards successful user interaction with systems: focusing on user-derived gestures for smart home systems.

    Science.gov (United States)

    Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K

    2014-07-01

    Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. The relative timing between eye and hand rapid sequential pointing is affected by time pressure, but not by advance knowledge

    NARCIS (Netherlands)

    Deconinck, F.; van Polanen, V.; Savelsbergh, G.J.P.; Bennett, S.

    2011-01-01

    The present study examined the effect of timing constraints and advance knowledge on eye-hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were

  16. Co-speech iconic gestures and visuo-spatial working memory.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2014-11-01

    Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Markerless Kinect-Based Hand Tracking for Robot Teleoperation

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2012-07-01

    Full Text Available This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. Using this tracking algorithm, the 3D positions of the index finger and thumb can be estimated by processing depth images from the Kinect. The hand pose is used as a model to specify the pose of a real-time remote robot's end-effector. This method provides a way to send a whole task to a remote robot, instead of sending limited motion commands as gesture-based approaches do, and it has been tested in pick-and-place tasks.
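
    As a rough illustration of the first stage such a tracker needs, the sketch below segments the hand as the nearest depth blob and back-projects a pixel to a 3D point with a pinhole camera model; the threshold and intrinsics are illustrative, and the paper's actual finger and thumb tracking is more involved than this.

```python
# Sketch: segment the nearest depth blob (assumed to be the hand) and back-project
# its centroid into camera coordinates. Intrinsics and threshold are illustrative.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-like intrinsics

def segment_hand(depth_mm, band=120):
    """Mask of pixels within `band` mm of the closest valid depth reading."""
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()
    return valid & (depth_mm < nearest + band)

def pixel_to_3d(u, v, z_mm):
    """Back-project pixel (u, v) with depth z (mm) into camera coordinates (metres)."""
    z = z_mm / 1000.0
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def hand_centroid_3d(depth_mm):
    """3D centroid of the segmented hand region in a depth frame."""
    mask = segment_hand(depth_mm)
    vs, us = np.nonzero(mask)
    z = depth_mm[vs, us].mean()
    return pixel_to_3d(us.mean(), vs.mean(), z)
```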

  18. Multisensory integration: the case of a time window of gesture-speech integration.

    Science.gov (United States)

    Obermeier, Christian; Gunter, Thomas C

    2015-02-01

    This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this would implicate that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.

  19. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives.

    Science.gov (United States)

    Quinto-Pozos, David; Parrill, Fey

    2015-01-01

    Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively. Copyright © 2014 Cognitive Science Society, Inc.

  20. Effects of Ving Tsun Chinese Martial Art Training on Upper Extremity Muscle Strength and Eye-Hand Coordination in Community-Dwelling Middle-Aged and Older Adults: A Pilot Study

    Science.gov (United States)

    Ng, Shamay S. M.; Cheng, Yoyo T. Y.; Yu, Esther Y. T.; Chow, Gary C. C.; Chak, Yvonne T. C.; Chan, Ivy K. Y.; Zhang, Joni; Macfarlane, Duncan

    2016-01-01

    Objectives. To evaluate the effects of Ving Tsun (VT) martial art training on the upper extremity muscle strength and eye-hand coordination of middle-aged and older adults. Methods. This study used a nonequivalent pretest-posttest control group design. Forty-two community-dwelling healthy adults participated in the study; 24 (mean age ± SD = 68.5 ± 6.7 years) underwent VT training for 4 weeks (a supervised VT session twice a week, plus daily home practice), and 18 (mean age ± SD = 72.0 ± 6.7 years) received no VT training and acted as controls. Shoulder and elbow isometric muscle strength and eye-hand coordination were evaluated using the Lafayette Manual Muscle Test System and a computerized finger-pointing test, respectively. Results. Elbow extensor peak force increased by 13.9% (P = 0.007) in the VT group and the time to reach peak force decreased (9.9%) differentially in the VT group compared to the control group (P = 0.033). For the eye-hand coordination assessment outcomes, reaction time increased by 2.9% in the VT group and decreased by 5.3% in the control group (P = 0.002). Conclusions. Four weeks of VT training could improve elbow extensor isometric peak force and the time to reach peak force but not eye-hand coordination in community-dwelling middle-aged and older adults. PMID:27525020

  1. Effects of Ving Tsun Chinese Martial Art Training on Upper Extremity Muscle Strength and Eye-Hand Coordination in Community-Dwelling Middle-Aged and Older Adults: A Pilot Study.

    Science.gov (United States)

    Fong, Shirley S M; Ng, Shamay S M; Cheng, Yoyo T Y; Wong, Janet Y H; Yu, Esther Y T; Chow, Gary C C; Chak, Yvonne T C; Chan, Ivy K Y; Zhang, Joni; Macfarlane, Duncan; Chung, Louisa M Y

    2016-01-01

    Objectives. To evaluate the effects of Ving Tsun (VT) martial art training on the upper extremity muscle strength and eye-hand coordination of middle-aged and older adults. Methods. This study used a nonequivalent pretest-posttest control group design. Forty-two community-dwelling healthy adults participated in the study; 24 (mean age ± SD = 68.5 ± 6.7 years) underwent VT training for 4 weeks (a supervised VT session twice a week, plus daily home practice), and 18 (mean age ± SD = 72.0 ± 6.7 years) received no VT training and acted as controls. Shoulder and elbow isometric muscle strength and eye-hand coordination were evaluated using the Lafayette Manual Muscle Test System and a computerized finger-pointing test, respectively. Results. Elbow extensor peak force increased by 13.9% (P = 0.007) in the VT group and the time to reach peak force decreased (9.9%) differentially in the VT group compared to the control group (P = 0.033). For the eye-hand coordination assessment outcomes, reaction time increased by 2.9% in the VT group and decreased by 5.3% in the control group (P = 0.002). Conclusions. Four weeks of VT training could improve elbow extensor isometric peak force and the time to reach peak force but not eye-hand coordination in community-dwelling middle-aged and older adults.

  2. Effects of Ving Tsun Chinese Martial Art Training on Upper Extremity Muscle Strength and Eye-Hand Coordination in Community-Dwelling Middle-Aged and Older Adults: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Shirley S. M. Fong

    2016-01-01

    Full Text Available Objectives. To evaluate the effects of Ving Tsun (VT) martial art training on the upper extremity muscle strength and eye-hand coordination of middle-aged and older adults. Methods. This study used a nonequivalent pretest-posttest control group design. Forty-two community-dwelling healthy adults participated in the study; 24 (mean age ± SD = 68.5±6.7 years) underwent VT training for 4 weeks (a supervised VT session twice a week, plus daily home practice), and 18 (mean age ± SD = 72.0±6.7 years) received no VT training and acted as controls. Shoulder and elbow isometric muscle strength and eye-hand coordination were evaluated using the Lafayette Manual Muscle Test System and a computerized finger-pointing test, respectively. Results. Elbow extensor peak force increased by 13.9% (P=0.007) in the VT group and the time to reach peak force decreased (9.9%) differentially in the VT group compared to the control group (P=0.033). For the eye-hand coordination assessment outcomes, reaction time increased by 2.9% in the VT group and decreased by 5.3% in the control group (P=0.002). Conclusions. Four weeks of VT training could improve elbow extensor isometric peak force and the time to reach peak force but not eye-hand coordination in community-dwelling middle-aged and older adults.

  3. The importance of considering gestures in the study of current spoken Yucatec Maya.

    Directory of Open Access Journals (Sweden)

    Olivier Le Guen

    2018-02-01

    Full Text Available For centuries, linguistic description has been somewhat limited because it was not possible to record audio and video. For this reason, the intrinsic multimodal nature of human language has been left out, putting aside various types of information, both prosodic and visual. This work analyzes the ways in which gestures complement speech, taking into account several levels of analysis: pragmatic, semantic and syntactic, but also how some gestures can be considered linguistic signs. In order to exemplify the argumentation, I will consider the Yucatec Maya language using examples of spontaneous productions. Although certain processes presented in this work are specific to Yucatec Maya, most can be found in various languages. This paper first presents a definition of language, speech and gestures, and how one can study the way in which speech and gestures are integrated in a composite utterance. Subsequently, I analyze examples of different types of gestures in various areas of communication in Yucatec Maya, such as deictic gestures, the use of expressive gestures, metaphors and the integration of gestures at the pragmatic level. Finally, I explain how gestures can become linguistic signs in Yucatec Maya.

  4. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    Science.gov (United States)

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results, delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles in turn depending on lesion pattern. Therefore we used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning, in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, rootwords were visually presented (e.g., Klavier, [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat pseudowords and simultaneously reproduce gestures. In the verbal condition no gesture-video was shown and participants only repeated pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI-analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of

  5. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gustaf Gredebäck

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  6. Parent-Child Gesture Use during Problem Solving in Autistic Spectrum Disorder

    Science.gov (United States)

    Medeiros, Kristen; Winsler, Adam

    2014-01-01

    This study examined the relationship between child language skills and parent and child gestures of 58 youths with and without an autism spectrum disorder (ASD) diagnosis. Frequencies and rates of total gesture use as well as five categories of gestures (deictic, conventional, beat, iconic, and metaphoric) were reliably coded during the…

  7. Does an eye-hand coordination test have added value as part of talent identification in table tennis? A validity and reproducibility study.

    Science.gov (United States)

    Faber, Irene R; Oosterveld, Frits G J; Nijhuis-Van der Sanden, Maria W G

    2014-01-01

    This study investigated the added value, i.e. discriminative and concurrent validity and reproducibility, of an eye-hand coordination test relevant to table tennis as part of talent identification. Forty-three table tennis players (7-12 years) from national (n = 13), regional (n = 11) and local training centres (n = 19) participated. During the eye-hand coordination test, children needed to throw a ball against a vertically positioned table tennis table with one hand and to catch the ball correctly with the other hand as frequently as possible in 30 seconds. Four different test versions were assessed, varying the distance to the table (1 or 2 meters) and using a tennis or table tennis ball. 'Within session' reproducibility was estimated for the two attempts of the initial tests and ten youngsters were retested after 4 weeks to estimate 'between sessions' reproducibility. Validity analyses using age as covariate showed that players from the national and regional centres scored significantly higher than players from the local centre in all test versions. The most suitable test version as part of talent identification appears to be the version with a table tennis ball at 1 meter, regarding the psychometric characteristics evaluated. Longitudinal studies are necessary to evaluate the predictive value of this test.

  8. Dry eye syndrome

    Science.gov (United States)

    MedlinePlus entry (//medlineplus.gov/ency/article/000426.htm) on dry eye syndrome. Contributing factors mentioned include second-hand smoke exposure and cold or allergy medicines; dry eye can also be caused by heat. Symptoms may include blurred vision and burning or itching.

  9. Does training with beat gestures favour children's narrative discourse abilities?

    OpenAIRE

    Vilà Giménez, Ingrid

    2016-01-01

    There is consensus evidence that gestures and prosody are important precursors of children’s early language abilities and development. Previous literature has investigated the beneficial role of beat gestures in the recall of information by preschoolers (Igualada, Esteve-Gibert, & Prieto, under review; Austin & Sweller, 2014). However, to our knowledge, little is known about whether the use of beat gestures can promote children’s later linguistic abilities and specifically whether training wi...

  10. Gesture, Landscape and Embrace: A Phenomenological Analysis of ...

    African Journals Online (AJOL)

    The 'radical reflection' on the 'flesh of the world' to which this analysis aspires in turn bears upon the general field of gestural reciprocities and connections, providing the insight that intimate gestures of the flesh, such as the embrace, are primordial attunements, motions of rhythm and reciprocity, that emanate from the world ...

  11. Gestures as Semiotic Resources in the Mathematics Classroom

    Science.gov (United States)

    Arzarello, Ferdinando; Paola, Domingo; Robutti, Ornella; Sabena, Cristina

    2009-01-01

    In this paper, we consider gestures as part of the resources activated in the mathematics classroom: speech, inscriptions, artifacts, etc. As such, gestures are seen as one of the semiotic tools used by students and teacher in mathematics teaching-learning. To analyze them, we introduce a suitable model, the "semiotic bundle." It allows focusing…

  12. Distinguishing the processing of gestures from signs in deaf individuals: an fMRI study.

    Science.gov (United States)

    Husain, Fatima T; Patkin, Debra J; Thai-Van, Hung; Braun, Allen R; Horwitz, Barry

    2009-06-18

    Manual gestures occur on a continuum from co-speech gesticulations to conventionalized emblems to language signs. Our goal in the present study was to understand the neural bases of the processing of gestures along such a continuum. We studied four types of gestures, varying along linguistic and semantic dimensions: linguistic and meaningful American Sign Language (ASL), non-meaningful pseudo-ASL, meaningful emblematic, and nonlinguistic, non-meaningful made-up gestures. Pre-lingually deaf, native signers of ASL participated in the fMRI study and performed two tasks while viewing videos of the gestures: a visuo-spatial (identity) discrimination task and a category discrimination task. We found that the categorization task activated left ventral middle and inferior frontal gyrus, among other regions, to a greater extent compared to the visual discrimination task, supporting the idea of semantic-level processing of the gestures. The reverse contrast resulted in enhanced activity of bilateral intraparietal sulcus, supporting the idea of featural-level processing (analogous to phonological-level processing of speech sounds) of the gestures. Regardless of the task, we found that brain activation patterns for the nonlinguistic, non-meaningful gestures were the most different compared to the ASL gestures. The activation patterns for the emblems were most similar to those of the ASL gestures and those of the pseudo-ASL were most similar to the nonlinguistic, non-meaningful gestures. The fMRI results provide partial support for the conceptualization of different gestures as belonging to a continuum and the variance in the fMRI results was best explained by differences in the processing of gestures along the semantic dimension.

  13. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    Science.gov (United States)

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions-left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)--responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  14. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    Full Text Available People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals’ naming of motion event components of manner and path (represented in English by verbs and prepositions, respectively) is selectively impaired, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere and that gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  15. Iconic Gestures Facilitate Discourse Comprehension in Individuals With Superior Immediate Memory for Body Configurations.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2015-11-01

    To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM)-a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation-that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.

  16. The Hands with Eyes and Nose in the Palm: As Effective Communication Alternatives for Profoundly Deaf People in Zimbabwe

    Science.gov (United States)

    Mutswanga, Phillipa

    2017-01-01

    Drawing from the experiences and testimonies of people with profound deafness, the study qualitatively explored the use of the hands with eyes and nose in the palm as communication alternatives in the field of deafness. The study was prompted by Leah Katz-Hernandez, a 27-year-old deaf woman who was engaged in March 2015 as the 2016…

  17. Reduction in gesture during the production of repeated references

    NARCIS (Netherlands)

    Hoetjes, M.W.; Koolen, R.M.F.; Goudbeek, M.B.; Krahmer, E.J.; Swerts, M.G.J.

    2015-01-01

    In dialogue, repeated references contain fewer words (which are also acoustically reduced) and fewer gestures than initial ones. In this paper, we describe three experiments studying to what extent gesture reduction is comparable to other forms of linguistic reduction. Since previous studies showed

  18. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia

    Science.gov (United States)

    Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-01-01

    Purpose Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed. PMID:28609510

  19. The importance of gestural communication: a study of human-dog communication using incongruent information.

    Science.gov (United States)

    D'Aniello, Biagio; Scandurra, Anna; Alterisio, Alessandra; Valsecchi, Paola; Prato-Previde, Emanuela

    2016-11-01

    We assessed how water rescue dogs, which were equally accustomed to respond to gestural and verbal requests, weighted gestural versus verbal information when asked by their owner to perform an action. Dogs were asked to perform four different actions ("sit", "lie down", "stay", "come") providing them with a single source of information (in Phase 1, gestural, and in Phase 2, verbal) or with incongruent information (in Phase 3, gestural and verbal commands referred to two different actions). In Phases 1 and 2, we recorded the frequency of correct responses as 0 or 1, whereas in Phase 3, we computed a 'preference index' (percentage of gestural commands followed over the total commands responded). Results showed that dogs followed gestures significantly better than words when these two types of information were used separately. Females were more likely to respond to gestural than verbal commands and males responded to verbal commands significantly better than females. In the incongruent condition, when gestures and words simultaneously indicated two different actions, the dogs overall preferred to execute the action required by the gesture rather than that required verbally, except when the verbal command "come" was paired with the gestural command "stay" with the owner moving away from the dog. Our data suggest that in dogs accustomed to respond to both gestural and verbal requests, gestures are more salient than words. However, dogs' responses appeared to be dependent also on the contextual situation: dogs' motivation to maintain proximity with an owner who was moving away could have led them to make the more 'convenient' choices between the two incongruent instructions.

  20. Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    Directory of Open Access Journals (Sweden)

    Angga Rahagiyanto

    2017-07-01

    Full Text Available Myo Armband has become an immersive technology that helps deaf people communicate with each other. A problem with the Myo sensor is its unstable clock rate, which yields data of different lengths for the same period, even for the same gesture. This research proposes the Moment Invariant Method to extract features from the Myo sensor data. The method reduces the amount of data and produces feature vectors of equal length. The approach is user-dependent, in line with the characteristics of the Myo Armband. Testing was performed using the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements. There are 26 classes of letters with 10 variants in each class. Min-max normalization is used to guarantee a common range for the data, and the K-Nearest Neighbor method is used to classify the dataset. Performance analysis with the leave-one-out validation method produced an accuracy of 82.31%. A more advanced classification method is required to improve the detection results.
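
    The normalization and classification stage described above can be illustrated with a brief sketch. This is a minimal example on synthetic placeholder data, assuming each gesture has already been reduced to a fixed-length feature vector by the moment-invariant extraction; the feature dimensionality and values below are illustrative, not the authors' dataset.

        # Minimal sketch of min-max normalization followed by K-Nearest Neighbor
        # classification with leave-one-out validation, on placeholder data.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        def min_max_normalize(features):
            """Scale each feature column into [0, 1] to guarantee a common range."""
            f_min = features.min(axis=0)
            f_max = features.max(axis=0)
            return (features - f_min) / (f_max - f_min + 1e-12)

        # Placeholder data: 26 classes x 10 samples, 8-dimensional feature vectors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(260, 8))
        y = np.repeat(np.arange(26), 10)

        knn = KNeighborsClassifier(n_neighbors=3)
        scores = cross_val_score(knn, min_max_normalize(X), y, cv=LeaveOneOut())
        print(f"leave-one-out accuracy: {scores.mean():.3f}")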

  1. A Hierarchical Model for Continuous Gesture Recognition Using Kinect

    DEFF Research Database (Denmark)

    Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel

    2013-01-01

    Human gesture recognition is an area, which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream, or with computationally intensive systems. The results are unf...

  2. Effects of Repetitive Transcranial Magnetic Stimulation in Performing Eye-Hand Integration Tasks: Four Preliminary Studies with Children Showing Low-Functioning Autism

    Science.gov (United States)

    Panerai, Simonetta; Tasca, Domenica; Lanuzza, Bartolo; Trubia, Grazia; Ferri, Raffaele; Musso, Sabrina; Alagona, Giovanna; Di Guardo, Giuseppe; Barone, Concetta; Gaglione, Maria P.; Elia, Maurizio

    2014-01-01

    This report, based on four studies with children with low-functioning autism, aimed at evaluating the effects of repetitive transcranial magnetic stimulation delivered on the left and right premotor cortices on eye-hand integration tasks; defining the long-lasting effects of high-frequency repetitive transcranial magnetic stimulation; and…

  3. Pantomimes are special gestures which rely on working memory.

    Science.gov (United States)

    Bartolo, A; Cubelli, R; Della Sala, S; Drei, S

    2003-12-01

    The case of a patient is reported who presented consistently with overt deficits in producing pantomimes in the absence of any other deficits in producing meaningful gestures. This pattern of spared and impaired abilities is difficult to reconcile with the current layout of cognitive models for praxis. This patient also showed clear impairment in a dual-task paradigm, a test taxing the co-ordination aspect of working memory, but performed normally on a series of other neuropsychological measures assessing language, visuo-spatial functions, reasoning, and executive function. A specific working memory impairment associated with a deficit of pantomiming in the absence of any other disorders in the production of meaningful gestures suggested a way to modify the model to account for the data. Pantomimes are a particular category of gestures: meaningful, yet novel. We posit that by their very nature they call for the intervention of a mechanism to integrate and synthesise perceptual inputs together with information made available from the action semantics (knowledge about objects and functions) and the output lexicon (stored procedural programmes). This processing stage, conceived as a temporary workspace where gesture information is actively manipulated, would generate new motor programmes to carry out pantomimes. The model of gesture production is refined to include this workspace.

  4. Relating Gestures and Speech: An analysis of students' conceptions about geological sedimentary processes

    Science.gov (United States)

    Herrera, Juan Sebastian; Riggs, Eric M.

    2013-08-01

    Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.

  5. The benefit of gestures during communication: evidence from hearing and hearing-impaired individuals.

    Science.gov (United States)

    Obermeier, Christian; Dolk, Thomas; Gunter, Thomas C

    2012-07-01

    There is no doubt that gestures are communicative and can be integrated online with speech. Little is known, however, about the nature of this process, for example, its automaticity and how our own communicative abilities and also our environment influence the integration of gesture and speech. In two Event Related Potential (ERP) experiments, the effects of gestures during speech comprehension were explored. In both experiments, participants performed a shallow task thereby avoiding explicit gesture-speech integration. In the first experiment, participants with normal hearing viewed videos in which a gesturing actress uttered sentences which were either embedded in multi-speaker babble noise or not. The sentences contained a homonym which was disambiguated by the information in a gesture, which was presented asynchronous to speech (1000 msec earlier). Downstream, the sentence contained a target word that was either related to the dominant or subordinate meaning of the homonym and was used to indicate the success of the disambiguation. Both the homonym and the target word position showed clear ERP evidence of gesture-speech integration and disambiguation only under babble noise. Thus, during noise, gestures were taken into account as an important communicative cue. In Experiment 2, the same asynchronous stimuli were presented to a group of hearing-impaired students and age-matched controls. Only the hearing-impaired individuals showed significant speech-gesture integration and successful disambiguation at the target word. The age-matched controls did not show any effect. Thus, individuals who chronically experience suboptimal communicative situations in daily life automatically take gestures into account. The data from both experiments indicate that gestures are beneficial in countering difficult communication conditions independent of whether the difficulties are due to external (babble noise) or internal (hearing impairment) factors. Copyright © 2011 Elsevier

  6. Hand-eye coordination of a robot for the automatic inspection of steam-generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, D.H.; Song, Y.C.; Kim, J.H.; Kim, J.G.

    2004-01-01

    The inspection of steam-generator tubes in nuclear power plants requires collecting test signals in a highly radiated region that is not accessible to humans. In general, a robot equipped with a camera and a test probe is used to handle such a dangerous environment. The robot moves the probe to right below a tube to be inspected and then the probe is inserted into the tube. The inspection signals are acquired while the probe is pulled back. Currently, an operator in a control room controls the whole process remotely. To make a fully automatic inspection system, first of all, a control mechanism is needed to position the probe at the proper location. This is the so-called hand-eye coordination problem. In this paper, a hand-eye coordination method for a robot is presented. The proposed method consists of two consecutive control modes: rough positioning and fine-tuning. The rough positioning controller tries to position the probe near the target place using kinematic information and the known environment, and then the fine-tuning controller adjusts the probe to the target using the image acquired by the camera attached to the robot. The usefulness of the proposed method has been tested and verified through experiments. (orig.)
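
    The coarse-then-fine control scheme can be illustrated with a toy sketch: a rough move to the nominal tube position, followed by an iterative correction loop driven by the measured image-space error. The numbers, gain, and simulated "camera" noise below are assumptions for illustration only, not the authors' controller.

        # Toy simulation of two-stage positioning: rough move to a nominal
        # position, then a visual fine-tuning loop that reduces the measured
        # offset between probe and tube until it falls below a tolerance.
        import numpy as np

        TOLERANCE = 1.0     # acceptable residual error (arbitrary units)
        GAIN = 0.5          # fraction of the measured error corrected per step
        MAX_ITERATIONS = 20

        def position_probe(nominal_xy, true_xy, rng):
            probe_xy = np.array(nominal_xy, dtype=float)       # Stage 1: rough positioning
            for _ in range(MAX_ITERATIONS):                    # Stage 2: fine-tuning
                # Simulated camera measurement: true offset plus sensing noise.
                error = np.array(true_xy) - probe_xy + rng.normal(0.0, 0.2, 2)
                if np.linalg.norm(error) < TOLERANCE:
                    return probe_xy, True                      # aligned, ready to insert probe
                probe_xy += GAIN * error
            return probe_xy, False                             # alignment failed, alert operator

        rng = np.random.default_rng(1)
        final_xy, ok = position_probe(nominal_xy=(100.0, 50.0), true_xy=(102.3, 48.7), rng=rng)
        print(ok, final_xy.round(2))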

  7. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual markers, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
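
    The core idea of segmenting the hand from a depth image and locating the fingers can be sketched in a few lines. The sketch below works on a synthetic point cloud and uses a simple heuristic (the points farthest from the palm centre are fingertip candidates); the thresholds and data are assumptions, and the published method is considerably more elaborate. In a real pipeline these fingertip positions would be tracked over frames and mapped to joint commands for the robotic hand.

        # Toy sketch: segment the hand from a point cloud by a depth threshold,
        # estimate the palm centre, and take the most protruding points as
        # fingertip candidates.
        import numpy as np

        def detect_fingertips(points, max_hand_depth=0.6, n_fingers=5):
            hand = points[points[:, 2] < max_hand_depth]    # assume the hand is closest to the camera
            palm_centre = hand.mean(axis=0)
            distances = np.linalg.norm(hand - palm_centre, axis=1)
            tip_idx = np.argsort(distances)[-n_fingers:]    # farthest points from the palm centre
            return hand[tip_idx], palm_centre

        rng = np.random.default_rng(0)
        cloud = rng.uniform([-0.2, -0.2, 0.4], [0.2, 0.2, 1.5], size=(5000, 3))  # fake XYZ points (metres)
        tips, palm = detect_fingertips(cloud)
        print(palm.round(2), tips.shape)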

  8. Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction

    Science.gov (United States)

    Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola

    2015-01-01

    Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…

  9. Traveller: An Interactive Cultural Training System Controlled by User-Defined Body Gestures

    NARCIS (Netherlands)

    Kistler, F.; André, E.; Mascarenhas, S.; Silva, A.; Paiva, A.; Degens, D.M.; Hofstede, G.J.; Krumhuber, E.; Kappas, A.; Aylett, R.

    2013-01-01

    In this paper, we describe a cultural training system based on an interactive storytelling approach and a culturally-adaptive agent architecture, for which a user-defined gesture set was created. 251 full body gestures by 22 users were analyzed to find intuitive gestures for the in-game actions in

  10. Communicative Effectiveness of Pantomime Gesture in People with Aphasia

    Science.gov (United States)

    Rose, Miranda L.; Mok, Zaneta; Sekine, Kazuki

    2017-01-01

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden…

  11. Individual differences in frequency and saliency of speech-accompanying gestures: the role of cognitive abilities and empathy.

    Science.gov (United States)

    Chu, Mingyuan; Meyer, Antje; Foulkes, Lucy; Kita, Sotaro

    2014-04-01

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals' cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
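
    The correlational and multiple-regression analysis described above can be sketched briefly. The example below runs on synthetic placeholder data (the predictor names mirror those in the abstract, but the values are invented), using simple Pearson correlations and an ordinary least-squares fit.

        # Illustrative correlation + multiple regression: predicting gesture
        # frequency from cognitive-ability and empathy scores (synthetic data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n = 120
        working_memory = rng.normal(size=n)       # visual/spatial working memory score
        spatial_ability = rng.normal(size=n)      # spatial transformation score
        empathy = rng.normal(size=n)              # empathy questionnaire score
        gesture_rate = (-0.4 * working_memory - 0.3 * spatial_ability
                        + 0.2 * empathy + rng.normal(scale=0.8, size=n))

        for name, x in [("working memory", working_memory),
                        ("spatial ability", spatial_ability),
                        ("empathy", empathy)]:
            r, p = stats.pearsonr(x, gesture_rate)
            print(f"{name:>15}: r = {r:+.2f}, p = {p:.3f}")

        # Multiple regression: gesture_rate ~ intercept + predictors.
        X = np.column_stack([np.ones(n), working_memory, spatial_ability, empathy])
        coeffs, *_ = np.linalg.lstsq(X, gesture_rate, rcond=None)
        print("coefficients (intercept, WM, spatial, empathy):", coeffs.round(2))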

  12. Methodological Reflections on Gesture Analysis in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Gullberg, Marianne

    2010-01-01

    Gestures, i.e. the symbolic movements that speakers perform while they speak, form a closely interconnected system with speech, where gestures serve both addressee-directed ("communicative") and speaker-directed ("internal") functions. This article aims (1) to show that a combined analysis of gesture and speech offers new ways to address…

  13. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  14. Body in Mind: How Gestures Empower Foreign Language Learning

    Science.gov (United States)

    Macedonia, Manuela; Knosche, Thomas R.

    2011-01-01

    It has previously been demonstrated that enactment (i.e., performing representative gestures during encoding) enhances memory for concrete words, in particular action words. Here, we investigate the impact of enactment on abstract word learning in a foreign language. We further ask if learning novel words with gestures facilitates sentence…

  15. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion...

  16. Individual differences in frequency and saliency of speech-accompanying gestures : the role of cognitive abilities and empathy

    OpenAIRE

    Chu, Mingyuan; Meyer, Antje; Foulkes, Lucy; Kita, Sotaro

    2014-01-01

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. ...

  17. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    Science.gov (United States)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
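
    One way to picture the gesture-driven visual transformations mentioned above (translation, rotation, scaling) is as a mapping from gesture parameters to a homogeneous transformation matrix. The sketch below is a hedged illustration: the parameter names and scaling are assumptions, and in the actual system the values would come from the gesture device and be applied to the VTK camera or actors.

        # Map gesture parameters (hand displacement, hand roll, pinch amount)
        # to a single 4x4 transform combining translation, rotation and scale.
        import numpy as np

        def gesture_to_transform(dx, dy, dz, roll_deg, pinch_scale):
            t = np.eye(4)
            t[:3, 3] = [dx, dy, dz]                    # translation from hand motion

            a = np.radians(roll_deg)                   # rotation about z from hand roll
            r = np.eye(4)
            r[0, 0], r[0, 1] = np.cos(a), -np.sin(a)
            r[1, 0], r[1, 1] = np.sin(a), np.cos(a)

            s = np.diag([pinch_scale] * 3 + [1.0])     # uniform scale from pinch distance
            return t @ r @ s

        m = gesture_to_transform(dx=5.0, dy=0.0, dz=-2.0, roll_deg=30, pinch_scale=1.2)
        print((m @ np.array([1.0, 0.0, 0.0, 1.0])).round(2))   # transform a sample point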

  18. Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention.

    Science.gov (United States)

    Congdon, Eliza L; Novack, Miriam A; Brooks, Neon; Hemani-Lopez, Naureen; O'Keefe, Lucy; Goldin-Meadow, Susan

    2017-08-01

    When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd-grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.

  19. Semantic brain areas are involved in gesture comprehension: An electrical neuroimaging study.

    Science.gov (United States)

    Proverbio, Alice Mado; Gabaro, Veronica; Orlandi, Andrea; Zani, Alberto

    2015-08-01

    While the mechanism of sign language comprehension in deaf people has been widely investigated, little is known about the neural underpinnings of spontaneous gesture comprehension in healthy speakers. Bioelectrical responses to 800 pictures of actors showing common Italian gestures (e.g., emblems, deictic or iconic gestures) were recorded in 14 persons. Stimuli were selected from a wider corpus of 1122 gestures. Half of the pictures were preceded by an incongruent description. ERPs were recorded from 128 sites while participants decided whether the stimulus was congruent. Congruent pictures elicited a posterior P300 followed by late positivity, while incongruent gestures elicited an anterior N400 response. N400 generators were investigated with swLORETA reconstruction. Processing of congruent gestures activated face- and body-related visual areas (e.g., BA19, BA37, BA22), the left angular gyrus, mirror fronto/parietal areas. The incongruent-congruent contrast particularly stimulated linguistic and semantic brain areas, such as the left medial and the superior temporal lobe. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    Science.gov (United States)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard and sometimes even unsafe for workers to go to each plant and pluck fruits. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work, such as grape harvesting. In this work we propose a semi-automatic method to aid in harvesting fruits and hence increase productivity per man-hour. A robotic arm fixed to a rover roams in the orchard, and the user can control it remotely using a hand glove fitted with various sensors. These sensors can position the robotic arm remotely to harvest the fruits. In this paper we discuss the design of the sensor-equipped hand glove, the design of the 4-DoF robotic arm, and the wireless control interface. In addition, the setup of the system and its testing and evaluation under lab conditions are also presented.
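
    The glove-to-arm mapping can be sketched as a simple linear scaling from raw sensor readings to joint angles. The sensor range, joint limits, and joint names below are assumptions chosen for illustration; the paper's actual calibration and wireless protocol are not reproduced here.

        # Hedged sketch: map four raw glove-sensor readings to joint angles of a
        # 4-DoF arm by linear interpolation within assumed joint limits.
        JOINTS = ["base", "shoulder", "elbow", "gripper"]
        SENSOR_RANGE = (200, 800)                 # assumed raw ADC range of each sensor
        JOINT_LIMITS = {                          # assumed joint limits in degrees
            "base": (0, 180), "shoulder": (15, 165), "elbow": (0, 135), "gripper": (10, 73),
        }

        def sensors_to_angles(readings):
            """Linearly map each raw sensor reading to its joint's angle range."""
            lo, hi = SENSOR_RANGE
            angles = {}
            for joint, raw in zip(JOINTS, readings):
                a_min, a_max = JOINT_LIMITS[joint]
                fraction = min(max((raw - lo) / (hi - lo), 0.0), 1.0)   # clamp to [0, 1]
                angles[joint] = round(a_min + fraction * (a_max - a_min), 1)
            return angles

        print(sensors_to_angles([210, 510, 640, 790]))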

  1. Probing the Mental Representation of Gesture: Is Handwaving Spatial?

    Science.gov (United States)

    Wagner, Susan M.; Nusbaum, Howard; Goldin-Meadow, Susan

    2004-01-01

    What type of mental representation underlies the gestures that accompany speech? We used a dual-task paradigm to compare the demands gesturing makes on visuospatial and verbal working memories. Participants in one group remembered a string of letters (verbal working memory group) and those in a second group remembered a visual grid pattern…

  2. An investigation of the use of co-verbal gestures in oral discourse among Chinese speakers with fluent versus non-fluent aphasia and healthy adults

    Directory of Open Access Journals (Sweden)

    Anthony Pak Hin Kong

    2015-04-01

    also administered tests that assessed their verbal and non-verbal semantic skills, oral naming abilities, aphasia syndromes and severities, and degree of hemiplegia. Results For Aim 1, results of Kruskal-Wallis tests revealed that the gesture-to-word ratio was significantly different across speaker groups, H(2) = 20.13, p < 0.001. Post-hoc analyses using Mann-Whitney tests revealed a significantly higher ratio in the non-fluent aphasic group (mean = 0.27, SD = 0.23), as compared to the fluent PWAs (mean = 0.10, SD = 0.12) and controls (mean = 0.04, SD = 0.05). Concerning Aim 2, deictic gestures were the most frequently used form of content-carrying co-verbal gestures by PWA and controls. Emblems, in contrast, were used the least. About half of the gestures employed did not serve a specific communication function in our speakers, but a higher proportion of the remaining half was used by controls to enhance the speech content. Both fluent and non-fluent PWAs, on the other hand, tended to use gestures to enhance the speech content and assist their lexical retrieval. As for Aim 3, results of a multiple regression suggested that PWAs’ discourse performance was significantly related to gesture-to-word ratio, F(1, 39) = 12.955, p < 0.01. Specifically, percentage of complete sentences and percentage of dysfluency significantly accounted for 24.9% and 14.2% of variance, respectively.
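
    The group comparison reported above (Kruskal-Wallis across the three speaker groups, followed by pairwise Mann-Whitney tests) can be sketched with standard statistical tooling. The data below are synthetic ratios generated only to mirror the reported group means and SDs; they are not the study data.

        # Sketch of the non-parametric group comparison on synthetic
        # gesture-to-word ratios for three speaker groups.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        nonfluent = rng.normal(0.27, 0.23, 21).clip(0)
        fluent = rng.normal(0.10, 0.12, 23).clip(0)
        controls = rng.normal(0.04, 0.05, 23).clip(0)

        h, p = stats.kruskal(nonfluent, fluent, controls)
        print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

        for name, group in [("fluent", fluent), ("controls", controls)]:
            u, p_pair = stats.mannwhitneyu(nonfluent, group, alternative="two-sided")
            print(f"non-fluent vs {name}: U = {u:.1f}, p = {p_pair:.4f}")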

  3. No One Even Has Eyes: The Decline of Hand-Painted Graphics in Mumbai

    Directory of Open Access Journals (Sweden)

    Aaron Fine

    2013-05-01

    Full Text Available In this work of creative non-fiction, accompanied by coloring book plates of his own design, the author explores recent changes in Indian visual culture. An investigation of hand-painted political graphics in Mumbai revealed very little painting and a great deal about the rapidly advancing digitalization of visual space in India. As idiosyncratic and individual creative efforts are replaced by mass-produced digital printing, in what ways are India’s political networks enhanced, and in what ways are India’s creative networks destroyed? Translators, police officers, political activists, and artists are presented through the eyes of an outsider whose own expectations about creative expression and political participation are challenged. The conclusion considers how once-recycled visual culture artifacts are now junk destined for the landfill, and urges readers to color in the whitewashed spaces of the city.

  4. Eye movement training is most effective when it involves a task-relevant sensorimotor decision.

    Science.gov (United States)

    Fooken, Jolande; Lalonde, Kathryn M; Mann, Gurkiran K; Spering, Miriam

    2018-04-01

    Eye and hand movements are closely linked when performing everyday actions. We conducted a perceptual-motor training study to investigate mutually beneficial effects of eye and hand movements, asking whether training in one modality benefits performance in the other. Observers had to predict the future trajectory of a briefly presented moving object, and intercept it at its assumed location as accurately as possible with their finger. Eye and hand movements were recorded simultaneously. Different training protocols either included eye movements or a combination of eye and hand movements with or without external performance feedback. Eye movement training did not transfer across modalities: Irrespective of feedback, finger interception accuracy and precision improved after training that involved the hand, but not after isolated eye movement training. Conversely, eye movements benefited from hand movement training or when external performance feedback was given, thus improving only when an active interceptive task component was involved. These findings indicate only limited transfer across modalities. However, they reveal the importance of creating a training task with an active sensorimotor decision to improve the accuracy and precision of eye and hand movements.

  5. Diagram, Gesture, Agency: Theorizing Embodiment in the Mathematics Classroom

    Science.gov (United States)

    de Freitas, Elizabeth; Sinclair, Nathalie

    2012-01-01

    In this paper, we use the work of philosopher Gilles Chatelet to rethink the gesture/diagram relationship and to explore the ways mathematical agency is constituted through it. We argue for a fundamental philosophical shift to better conceptualize the relationship between gesture and diagram, and suggest that such an approach might open up new…

  6. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    Science.gov (United States)

    Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506

  7. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    Directory of Open Access Journals (Sweden)

    Mats B. Küssner

    2014-07-01

    Full Text Available Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked sixty-four musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesised musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e. rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

  8. The Different Patterns of Gesture between Genders in Mathematical Problem Solving of Geometry

    Science.gov (United States)

    Harisman, Y.; Noto, M. S.; Bakar, M. T.; Amam, A.

    2017-02-01

    This article discusses students’ gestures across genders when answering geometry problems. Gestures are used to check aspects of students’ understanding that are not evident from their writing. This study is qualitative research; seven questions were given to two eighth-grade Junior High School students of equal ability. The data were collected from a mathematical problem solving test, video recordings of the students’ presentations, and interviews in which questions were asked to check their understanding of the geometry problems while the researchers observed the students’ gestures. The results revealed patterns of gesture in the students’ conversation and prosodic cues, such as tone, intonation, speech rate and pauses. Female students tended to give indecisive gestures, for instance bowing, hesitating, appearing embarrassed, nodding many times when shifting cognitive comprehension, leaning their body forward and asking questions of the interviewer when they found the questions difficult. Male students, by contrast, showed gestures such as playing with their fingers, focusing on the questions, taking longer to answer hard questions, and staying calm when shifting cognitive comprehension. We suggest observing a larger sample and focusing on the consistency of students’ gestures in showing their understanding of the given problems.

  9. Touch and You’re Trapp(ck)ed: Quantifying the Uniqueness of Touch Gestures for Tracking

    Directory of Open Access Journals (Sweden)

    Masood Rahat

    2018-04-01

    Full Text Available We argue that touch-based gestures on touch-screen devices enable the threat of a form of persistent and ubiquitous tracking which we call touch-based tracking. Touch-based tracking goes beyond the tracking of virtual identities and has the potential for cross-device tracking as well as identifying multiple users using the same device. We demonstrate the likelihood of touch-based tracking by focusing on touch gestures widely used to interact with touch devices, such as swipes and taps. Our objective is to quantify and measure the information carried by touch-based gestures which may lead to tracking users. For this purpose, we develop an information theoretic method that measures the amount of information about users leaked by gestures when modelled as feature vectors. Our methodology allows us to evaluate the information leaked by individual features of gestures, samples of gestures, as well as samples of combinations of gestures. Through our purpose-built app, called TouchTrack, we gather gesture samples from 89 users, and demonstrate that touch gestures contain sufficient information to uniquely identify and track users. Our results show that writing samples (on a touch pad) can reveal 73.7% of information (when measured in bits), and left swipes can reveal up to 68.6% of information. Combining different combinations of gestures results in higher uniqueness, with the combination of keystrokes, swipes and writing revealing up to 98.5% of information about users. We further show that, through our methodology, we can correctly re-identify returning users with a success rate of more than 90%.
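
    The central quantity here, the number of bits of identifying information carried by a gesture feature, can be illustrated with a small sketch. The example below computes the mutual information I(U; F) between user identity and one discretized feature on synthetic data; the paper's actual methodology over full feature vectors and gesture combinations is considerably richer.

        # Hedged sketch: bits of identity information carried by a single
        # discretized gesture feature, via mutual information on synthetic data.
        import numpy as np

        def mutual_information_bits(users, feature_bins):
            joint = np.zeros((users.max() + 1, feature_bins.max() + 1))
            for u, f in zip(users, feature_bins):
                joint[u, f] += 1
            joint /= joint.sum()
            pu = joint.sum(axis=1, keepdims=True)          # marginal over users
            pf = joint.sum(axis=0, keepdims=True)          # marginal over feature bins
            nz = joint > 0
            return float((joint[nz] * np.log2(joint[nz] / (pu @ pf)[nz])).sum())

        rng = np.random.default_rng(3)
        n_users, samples_per_user = 20, 50
        users = np.repeat(np.arange(n_users), samples_per_user)
        # Each user has a characteristic swipe speed; discretize it into 16 bins.
        speeds = rng.normal(loc=rng.uniform(0, 1, n_users)[users], scale=0.15)
        bins = np.digitize(speeds, np.linspace(speeds.min(), speeds.max(), 16))

        mi = mutual_information_bits(users, bins)
        print(f"I(U;F) = {mi:.2f} of {np.log2(n_users):.2f} bits of identity entropy")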

  10. Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers

    Science.gov (United States)

    2016-07-01

    application is that of the Kinect system, where camera-based interpretation of user body posture and movements serves to control videogame features...modalities will benefit from respective advantages. The combination of deictic gestures to support human-human interactions has been well...used in spontaneous gesture production were to clarify speech utterances. Studies have shown demonstrable benefits from use of gestures to support

  11. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, were compared to the production of speech sylla...

  12. Embedding gesture recognition into airplane seats for in-flight entertainment

    NARCIS (Netherlands)

    van de Westelaken, H.F.M.; Hu, J.; Liu, H.; Rauterberg, G.W.M.

    2011-01-01

    In order to reduce both psychological and physical stress in air travel, sensors are integrated into airplane seats to detect gestures as input for in-flight entertainment systems. The content provided by the entertainment systems helps to reduce psychological stress, and gesture recognition is used

  13. Play-solicitation gestures in chimpanzees in the wild: flexible adjustment to social circumstances and individual matrices.

    Science.gov (United States)

    Fröhlich, Marlen; Wittig, Roman M; Pika, Simone

    2016-08-01

    Social play is a frequent behaviour in great apes and involves sophisticated forms of communicative exchange. While it is well established that great apes test and practise the majority of their gestural signals during play interactions, the influence of demographic factors and kin relationships between the interactants on the form and variability of gestures is relatively little understood. We thus carried out the first systematic study on the exchange of play-soliciting gestures in two chimpanzee (Pan troglodytes) communities of different subspecies. We examined the influence of age, sex and kin relationships of the play partners on gestural play solicitations, including object-associated and self-handicapping gestures. Our results demonstrated that the usage of (i) audible and visual gestures increased significantly with infant age, (ii) tactile gestures differed between the sexes, and (iii) audible and visual gestures were higher in interactions with conspecifics than with mothers. Object-associated and self-handicapping gestures were frequently used to initiate play with same-aged and younger play partners, respectively. Our study thus strengthens the view that gestures are mutually constructed communicative means, which are flexibly adjusted to social circumstances and individual matrices of interactants.

  14. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  15. Natural user interface as a supplement of the holographic Raman tweezers

    Science.gov (United States)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via a mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the acquisition system for Raman spectra. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") serving for the manipulation of particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful when both hands are busy. The proposed methods make manual control of HRT more efficient, and they also provide a good platform for future semi-automated and fully automated operation.
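
    The record above describes fusing hand tracking, gaze and speech into control commands for the tweezers. Below is a minimal, hypothetical sketch of such a dispatch step, assuming the sensor readings (dominant-hand position, secondary-hand gesture, gaze point, recognized speech) have already been acquired elsewhere; none of the class, field or command names come from the authors' software.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Frame:
    dominant_hand_xyz: Optional[Tuple[float, float, float]]  # dominant-hand position (m), or None
    secondary_gesture: Optional[str]                          # e.g. "pinch", or None
    gaze_xy: Optional[Tuple[int, int]]                        # gaze point on the image (px), or None
    speech_command: Optional[str]                             # recognized utterance, or None

def dispatch(frame: Frame):
    """Translate one multimodal frame into high-level tweezer commands."""
    commands = []
    if frame.dominant_hand_xyz is not None:
        # Dominant hand steers the active optical trap.
        commands.append(("move_trap", frame.dominant_hand_xyz))
    if frame.secondary_gesture == "pinch":
        # A secondary-hand gesture toggles the virtual tweezers tool.
        commands.append(("toggle_tweezers", None))
    if frame.gaze_xy is not None:
        # The gaze point identifies the observed particle for autofocus.
        commands.append(("autofocus_at", frame.gaze_xy))
    if frame.speech_command is not None:
        # Speech is useful when both hands are busy.
        commands.append(("speech", frame.speech_command.lower()))
    return commands

print(dispatch(Frame((0.01, 0.02, 0.10), "pinch", (320, 240), "acquire spectrum")))
```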

  16. Integrating gesture recognition in airplane seats for in-flight entertainment

    NARCIS (Netherlands)

    van de Westelaken, H.F.M.; Hu, J.; Liu, H.; Rauterberg, G.W.M.; Pan, Z.; Zhang, X.; El Rhalibi, A.; Woo, W.; Li, Y.

    2008-01-01

    In order to reduce both the psychological and physical stress in air travel, sensors are integrated in airplane seats to detect the gestures as input for in-flight entertainment systems. The content provided by the entertainment systems helps to reduce the psychological stress, and the gesture

  17. Do you see what I mean? Corticospinal excitability during observation of culture-specific gestures.

    Directory of Open Access Journals (Sweden)

    Istvan Molnar-Szakacs

    2007-07-01

    Full Text Available People all over the world use their hands to communicate expressively. Autonomous gestures, also known as emblems, are highly social in nature, and convey conventionalized meaning without accompanying speech. To study the neural bases of cross-cultural social communication, we used single pulse transcranial magnetic stimulation (TMS) to measure corticospinal excitability (CSE) during observation of culture-specific emblems. Foreign Nicaraguan and familiar American emblems as well as meaningless control gestures were performed by both a Euro-American and a Nicaraguan actor. Euro-American participants demonstrated higher CSE during observation of the American compared to the Nicaraguan actor. This motor resonance phenomenon may reflect ethnic and cultural ingroup familiarity effects. However, participants also demonstrated a nearly significant (p = 0.053) actor by emblem interaction whereby both Nicaraguan and American emblems performed by the American actor elicited similar CSE, whereas Nicaraguan emblems performed by the Nicaraguan actor yielded higher CSE than American emblems. The latter result cannot be interpreted simply as an effect of ethnic ingroup familiarity. Thus, a likely explanation of these findings is that motor resonance is modulated by interacting biological and cultural factors.

  18. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    Science.gov (United States)

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology

  19. A prelinguistic gestural universal of human communication.

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10-14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants' pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers' and infants' pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication. Copyright © 2012 Cognitive Science Society, Inc.

  20. Gestures Towards the Digital Maypole

    Directory of Open Access Journals (Sweden)

    Christian McRea

    2005-01-01

    Full Text Available To paraphrase Blanchot: We are not learned; we are not ignorant. We have known joys. That is saying too little: We are alive, and this life gives us the greatest pleasure. The intensities afforded by mobile communication can be thought of as an extension of the styles and gestures already materialised by multiple maypole cultures, pre-digital community forms and the very clustered natures of speech and being. In his Critique of Judgment, Kant argues that the information selection process at the heart of communication is one of the fundamental activities of any aesthetically produced knowledge form. From this radial point, "Gestures Towards The Digital Maypole" begins the process of reorganising conceptions of modalities of communication around the absent centre and the affective realms that form through the movement of information-energy, like sugar in a hurricane.

  1. Cross-Cultural Transfer in Gesture Frequency in Chinese-English Bilinguals

    Science.gov (United States)

    So, Wing Chee

    2010-01-01

    The purpose of this paper is to examine cross-cultural differences in gesture frequency and the extent to which exposure to two cultures would affect the gesture frequency of bilinguals when speaking in both languages. The Chinese-speaking monolinguals from China, English-speaking monolinguals from America, and Chinese-English bilinguals from…

  2. Does gesture add to the comprehensibility of people with aphasia?

    NARCIS (Netherlands)

    van Nispen, Karin; Sekine, Kazuki; Rose, Miranda; Ferré, Gaëlle; Tutton, Mark

    2015-01-01

    Gesture can convey information co-occurring with and in the absence of speech. As such, it seems a useful strategy for people with aphasia (PWA) to compensate for their impaired speech. To find out whether gestures used by PWA add to the comprehensibility of their communication we looked at the

  3. The Sony PlayStation II EyeToy: low-cost virtual reality for use in rehabilitation.

    Science.gov (United States)

    Rand, Debbie; Kizony, Rachel; Weiss, Patrice Tamar L

    2008-12-01

    The objective of this study was to investigate the potential of using a low-cost video-capture virtual reality (VR) platform, the Sony PlayStation II EyeToy, for the rehabilitation of older adults with disabilities. This article presents three studies that were carried out to provide information about the EyeToy's potential for use in rehabilitation. The first study included the testing of healthy young adults (N = 34) and compared their experiences using the EyeToy with those using GestureTek's IREX VR system in terms of a sense of presence, level of enjoyment, control, success, and perceived exertion. The second study aimed to characterize the VR experience of healthy older adults (N = 10) and to determine the suitability and usability of the EyeToy for this population and the third study aimed to determine the feasibility of the EyeToy for use by individuals (N = 12) with stroke at different stages. The implications of these three studies for applying the system to rehabilitation are discussed.

  4. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.

    Science.gov (United States)

    Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando

    2017-08-01

    Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real-time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability, as the proposed touchless interface improves spatial awareness and affords more fluent interaction with the 3D volume than traditional 2D input devices, since it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed. Copyright © 2017 Elsevier Inc. All rights reserved.
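
    One way a pair of 3D hand cursors can drive volume rotation, as described above, is to apply the rotation that maps the previous inter-hand vector onto the current one. The sketch below shows that computation (Rodrigues' rotation formula) with made-up cursor positions; it is only an illustration of the idea, not the interface's actual mapping.

```python
import numpy as np

def rotation_between(v0, v1):
    """Rotation matrix that turns direction v0 into direction v1 (Rodrigues' formula)."""
    v0 = v0 / np.linalg.norm(v0)
    v1 = v1 / np.linalg.norm(v1)
    axis = np.cross(v0, v1)
    s, c = np.linalg.norm(axis), np.dot(v0, v1)   # sin and cos of the rotation angle
    if s < 1e-9:                                  # (anti)parallel vectors: skip rotation
        return np.eye(3)
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Previous and current positions of the two hand cursors (metres, made-up values).
left_prev, right_prev = np.array([-0.1, 0.0, 0.4]), np.array([0.1, 0.0, 0.4])
left_now,  right_now  = np.array([-0.1, 0.02, 0.4]), np.array([0.1, -0.02, 0.4])

# Incremental rotation to apply to the rendered volume for this frame.
R = rotation_between(right_prev - left_prev, right_now - left_now)
print(np.round(R, 3))
```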

  5. The Effects of Prohibiting Gestures on Children's Lexical Retrieval Ability

    Science.gov (United States)

    Pine, Karen J.; Bird, Hannah; Kirk, Elizabeth

    2007-01-01

    Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996)…

  6. Gestures and metaphors as indicators of conceptual understanding of sedimentary systems

    Science.gov (United States)

    Riggs, E. M.; Herrera, J. S.

    2012-12-01

    Understanding the geometry and evolution of sedimentary systems and sequence stratigraphy is crucial to the development of geoscientists and engineers working in the petroleum industry. There is a wide variety of audiences within industry who require relatively advanced instruction in this area of geoscience, and there is an equally wide array of approaches to teaching this material in the classroom and field. This research was undertaken to develop a clearer picture of how conceptual understanding in this area of sedimentary geology grows as a result of instruction and how instructors can monitor the completeness and accuracy of student thinking and mental models. We sought ways to assess understanding that did not rely on model-specific jargon but rather was based in physical expression of basic processes and attributes of sedimentary systems. Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture, (e.g. giving directions, describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image-schemas as a source of concept representation for students' learning of sedimentary processes. In order to explore image schemas that lie in student explanations, we focused our analysis on four core ideas about sedimentary systems that involve sea level change and sediment deposition, namely relative sea level, base level, and sea-level fluctuations and resulting basin geometry and sediment deposition changes. The study included 25 students from three U.S. Midwestern universities. Undergraduate and graduate-level participants were enrolled in senior-level undergraduate courses in sedimentology and stratigraphy. We used semi-structured interviews and videotaping for data collection. We coded the data to focus on deictic, iconic, and metaphoric gestures, and coded interview transcripts for linguistic metaphors using the

  7. Evaluation of the safety and usability of touch gestures in operating in-vehicle information systems with visual occlusion.

    Science.gov (United States)

    Kim, Huhn; Song, Haewon

    2014-05-01

    Nowadays, many automobile manufacturers are interested in applying the touch gestures that are used in smart phones to operate their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures in the operation of IVISs from the viewpoints of both driving safety and usability. In the experiment, two devices were used: one was the Apple iPad, with which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping touch gestures. The participants performed the touch operations using the two devices under visually occluded situations, which is a well-known technique for estimating the load of visual attention while driving. In scrolling through a list, the flicking gestures required more time than the tapping gestures. Interestingly, both the flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time taken per operation and the visual attention load required for the panning gestures did not differ from those of the simple tapping gestures that are used in existing car navigation systems. In zooming in/out of a map, the average time taken per pinching gesture was similar to that of the tapping gesture but required higher visual attention. Moreover, pinching gestures at a display angle of 75° required that the participants severely bend their wrists. Because the display angles of many car navigation systems tend to be more than 75°, pinching gestures can cause severe fatigue in users' wrists. Furthermore, contrary to participants' evaluation of other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or

  8. Real-Time Hand Position Sensing Technology Based on Human Body Electrostatics

    Directory of Open Access Journals (Sweden)

    Kai Tang

    2018-05-01

    Full Text Available Non-contact human-computer interaction (HCI) based on hand gestures has been widely investigated. Here, we present a novel method to locate the real-time position of the hand using the electrostatics of the human body. This method has many advantages, including a delay of less than one millisecond and low cost, and it does not require a camera or wearable devices. A sensing formula for the array signals of five spherical electrodes is first derived. Next, an algorithm for solving the real-time measured hand position is introduced, and the solving equations for the three-dimensional coordinates of the hand position are obtained. A non-contact real-time hand position sensing system was established to perform verification experiments, and the principle error of the algorithm and the systematic noise were also analyzed. The results show that this novel technology can determine the dynamic parameters of hand movements with good robustness to meet the requirements of complicated HCI.
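
    The abstract mentions a sensing formula for the five-electrode array and solving equations for the hand's 3D coordinates. The sketch below illustrates the general approach under a simplified, assumed signal model (amplitude falling off as the inverse square of the hand-electrode distance, with known source strength); the electrode layout and the model are placeholders, not the paper's actual electrostatic formula.

```python
import numpy as np
from scipy.optimize import least_squares

# Electrode positions of a five-sphere sensing array (metres, illustrative layout).
ELECTRODES = np.array([[0.0, 0.0, 0.0],
                       [0.3, 0.0, 0.0],
                       [0.0, 0.3, 0.0],
                       [0.3, 0.3, 0.0],
                       [0.15, 0.15, 0.1]])

def model(hand_xyz, q=1.0):
    """Assumed signal model: amplitude ~ q / r^2 to each electrode.
    This stands in for the paper's electrostatic induction formula."""
    r = np.linalg.norm(ELECTRODES - hand_xyz, axis=1)
    return q / r**2

def locate_hand(signals, x0=(0.15, 0.15, 0.2)):
    """Solve for the 3D hand position that best explains the five readings."""
    residual = lambda p: model(p) - signals
    return least_squares(residual, x0).x

true_pos = np.array([0.20, 0.10, 0.25])
readings = model(true_pos)                  # noiseless synthetic measurements
print(np.round(locate_hand(readings), 3))   # should recover ~[0.2, 0.1, 0.25]
```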

  9. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  10. Effects of gestures on older adults' learning from video-based models

    NARCIS (Netherlands)

    Ouwehand, Kim; van Gog, Tamara|info:eu-repo/dai/nl/294304975; Paas, Fred

    2015-01-01

    This study investigated whether the positive effects of gestures on learning by decreasing working memory load, found in children and young adults, also apply to older adults, who might especially benefit from gestures given memory deficits associated with aging. Participants learned a

  11. Intraspecific gestural laterality in chimpanzees and gorillas and the impact of social propensities.

    Science.gov (United States)

    Prieur, Jacques; Pika, Simone; Barbu, Stéphanie; Blois-Heulin, Catherine

    2017-09-01

    A relevant approach to address the mechanisms underlying the emergence of the right-handedness/left-hemisphere language specialization of humans is to investigate both proximal and distal causes of language lateralization through the study of non-human primates' gestural laterality. We carried out the first systematic, quantitative comparison of within-subjects' and between-species' laterality by focusing on the laterality of intraspecific gestures of chimpanzees (Pan troglodytes) and gorillas (Gorilla gorilla) living in six different captive groups. We addressed the following two questions: (1) Do chimpanzees and gorillas exhibit stable direction of laterality when producing different types of gestures at the individual level? If yes, is it related to the strength of laterality? (2) Is there a species difference in gestural laterality at the population level? If yes, which factors could explain this difference? During 1356 observation hours, we recorded 42335 cases of dyadic gesture use in the six groups totalling 39 chimpanzees and 35 gorillas. Results showed that both species could exhibit either stability or flexibility in their direction of gestural laterality. These results suggest that both stability and flexibility may have differently modulated the strength of laterality depending on the species social structure and dynamics. Furthermore, a multifactorial analysis indicates that these particular social components may have specifically impacted gestural laterality through the influence of gesture sensory modality and the position of the recipient in the signaller's visual field during interaction. Our findings provide further support to the social theory of laterality origins proposing that social pressures may have shaped laterality through natural selection. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The role of gestures in achieving understanding in Early English teaching in Denmark

    DEFF Research Database (Denmark)

    aus der Wieschen, Maria Vanessa; Eskildsen, Søren Wind

    school in Denmark. The use of multimodal resources employed by teachers in foreign language classrooms has been studied by e.g. Muramuto (1999), Lazaraton (2004), Taleghani-Nikazm (2008), Eskildsen & Wagner (2013), Sert (2015). This research has established gestures as a pervasive phenomenon in language...... brings this established agreement on the importance of gestures in classroom interaction to bear on early foreign language learning: Whereas prior work on gestures in L2 classrooms has predominantly dealt with adult L2 learners, this paper investigates the extent to which a teacher makes use of gestures...... in early child foreign language teaching. Using multimodal conversation analysis of three hours of classroom instruction in a Danish primary school, we uncover how a teacher uses gestures to enhance the comprehension of his L2 talk when teaching English in the 1st and 3rd grade, both of which are beginning...

  13. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    Science.gov (United States)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of augmented reality (AR) maintenance guiding systems. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition is divided into three stages: gesture segmentation, gesture characteristic feature modeling and gesture classification. In the segmentation stage, to solve the misrecognition of skin-like regions, a segmentation algorithm combining a background model and skin color is adopted to exclude skin-like regions. In the gesture characteristic feature modeling stage, plenty of characteristic features of the image are analyzed and acquired, such as structure characteristics, Hu invariant moments and Fourier descriptors. In the classification stage, a classifier based on Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. SVM is a learning method based on statistical learning theory, with a solid theoretical foundation and excellent learning ability; it has been applied to many problems in machine learning and has particular advantages in dealing with small samples and non-linear, high-dimensional pattern recognition. Gesture recognition in the augmented reality maintenance guiding system is realized by the SVM after all the characteristic features are aggregated. The experimental results of the simulation of number gesture recognition and its application in the augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition in the AR maintenance guiding system can be greatly enhanced by the improved SVM.
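
    A small sketch of the feature-plus-classifier stage described above, using OpenCV Hu invariant moments and scikit-learn's stock SVC on dummy binary masks; the segmentation stage and the paper's specific SVM improvements are not reproduced here.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(mask):
    """Log-scaled Hu invariant moments of a binary hand mask (one gesture image)."""
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range

# Placeholder training data: the binary masks and gesture labels would come from
# the segmentation stage (background model + skin colour) described above.
masks = [np.zeros((64, 64), np.uint8) for _ in range(4)]
for i, m in enumerate(masks):
    cv2.circle(m, (32, 32), 10 + 5 * i, 255, -1)          # dummy hand shapes
labels = [0, 0, 1, 1]

X = np.array([hu_features(m) for m in masks])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.predict(X[:1]))
```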

  14. Crossover learning of gestures in two ideomotor apraxia patients: A single case experimental design study.

    Science.gov (United States)

    Shimizu, Daisuke; Tanemura, Rumi

    2017-06-01

    Crossover learning may aid rehabilitation in patients with neurological disorders. Ideomotor apraxia (IMA) is a common sequela of left-brain damage that comprises a deficit in the ability to perform gestures to verbal commands or by imitation. This study elucidated whether crossover learning occurred in two post-stroke IMA patients without motor paralysis after gesture training approximately 2 months after stroke onset. We quantitatively analysed the therapeutic intervention history and investigated whether revised action occurred during gesture production. The treatment intervention examined how to promote improvement and generalisation of the ability to produce gestures. This study used an alternating treatments single-subject design, and the intervention method was errorless learning. Results indicated crossover learning in both patients. Qualitative analysis indicated that revised action occurred during the gesture-production process in one patient and that there were two types of post-revised action gestures: correct and incorrect gestures. We also discovered that even when a comparably short time had elapsed since stroke onset, generalisation was difficult. Information transfer between the left and right hemispheres of the brain via commissural fibres is important in crossover learning. In conclusion, improvements in gesture-production skill should be made with reference to the left cerebral hemisphere disconnection hypothesis.

  15. The brain's dorsal route for speech represents word meaning: evidence from gesture.

    Science.gov (United States)

    Josse, Goulven; Joseph, Sabine; Bertasi, Eric; Giraud, Anne-Lise

    2012-01-01

    The dual-route model of speech processing includes a dorsal stream that maps auditory to motor features at the sublexical level rather than at the lexico-semantic level. However, the literature on gesture is an invitation to revise this model because it suggests that the premotor cortex of the dorsal route is a major site of lexico-semantic interaction. Here we investigated lexico-semantic mapping using word-gesture pairs that were either congruent or incongruent. Using fMRI-adaptation in 28 subjects, we found that temporo-parietal and premotor activity during auditory processing of single action words was modulated by the prior audiovisual context in which the words had been repeated. The BOLD signal was suppressed following repetition of the auditory word alone, and further suppressed following repetition of the word accompanied by a congruent gesture (e.g. ["grasp" + grasping gesture]). Conversely, repetition suppression was not observed when the same action word was accompanied by an incongruent gesture (e.g. ["grasp" + sprinkle]). We propose a simple model to explain these results: auditory and visual information converge onto premotor cortex where it is represented in a comparable format to determine (in)congruence between speech and gesture. This ability of the dorsal route to detect audiovisual semantic (in)congruence suggests that its function is not restricted to the sublexical level.

  16. Foreign Object in the Eye: First Aid

    Science.gov (United States)

  17. Car Gestures - Advisory warning using additional steering wheel angles.

    Science.gov (United States)

    Maag, Christian; Schneider, Norbert; Lübbeke, Thomas; Weisswange, Thomas H; Goerick, Christian

    2015-10-01

    Advisory warning systems (AWS) notify the driver about upcoming hazards. This is in contrast to the majority of currently deployed advanced driver assistance systems (ADAS) that manage emergency situations. The target of this study is to investigate the effectiveness, acceptance, and controllability of a specific kind of AWS that uses the haptic information channel for warning the driver. This could be beneficial, as alternatives for using the visual modality can help to reduce the risk of visual overload. The driving simulator study (N=24) compared an AWS based on additional steering wheel angle control (Car Gestures) with a visual warning presented in a simulated head-up display (HUD). Both types of warning were activated 3.5s before the hazard object was reached. An additional condition of unassisted driving completed the experimental design. The subjects encountered potential hazards in a variety of urban situations (e.g. a pedestrian standing on the curbs). For the investigated situations, subjective ratings show that a majority of drivers prefer visual warnings over haptic information via gestures. An analysis of driving behavior indicates that both warning approaches guide the vehicle away from the potential hazard. Whereas gestures lead to a faster lateral driving reaction (compared to HUD warnings), the visual warnings result in a greater safety benefit (measured by the minimum distance to the hazard object). A controllability study with gestures in the wrong direction (i.e. leading toward the hazard object) shows that drivers are able to cope with wrong haptic warnings and safety is not reduced compared to unassisted driving as well as compared to (correct) haptic gestures and visual warnings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy

    OpenAIRE

    Shin, Ji-won; Song, Gui-bin; Hwangbo, Gak

    2015-01-01

    [Purpose] The purpose of the study was to evaluate the effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy. [Subjects] Sixteen children (9 males, 7 females) with spastic diplegic cerebral palsy were recruited and randomly assigned to the conventional neurological physical therapy group (CG) and virtual reality training group (VRG). [Methods] Eight children in the control group performed 45 minutes of th...

  19. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech function in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere: to support the production of vocal tract gestures that are not limited to speech processing.

  20. On the road to a neuroprosthetic hand: a novel hand grasp orthosis based on functional electrical stimulation.

    Science.gov (United States)

    Leeb, Robert; Gubler, Miguel; Tavella, Michele; Miller, Heather; Del Millan, Jose R

    2010-01-01

    To patients who have lost the functionality of their hands as a result of a severe spinal cord injury or brain stroke, the development of new techniques for grasping is indispensable for reintegration and independence in daily life. Functional Electrical Stimulation (FES) of residual muscles can reproduce the most dominant grasping tasks and can be initiated by brain signals. However, due to the very complex hand anatomy and current limitations in FES technology with surface electrodes, these grasp patterns cannot be smoothly executed. In this paper, we present an adaptable passive hand orthosis which is capable of producing natural and smooth movements when coupled with FES. It evenly synchronizes the grasping movements and applied forces on all fingers, allowing for naturalistic gestures and functional grasps of everyday objects. The orthosis is also equipped with a lock, which allows it to remain in the desired position without the need for long-term stimulation. Furthermore, we quantify the improvements offered by the orthosis and compare them with natural grasps in healthy subjects.

  1. Comparison of gesture and conventional interaction techniques for interventional neuroradiology.

    Science.gov (United States)

    Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian

    2017-09-01

    Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed potential for the simple image selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the relevance of task characteristics, such as task complexity, to performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.

  2. Remembering what was said and done: The activation and facilitation of memory for gesture as a consequence of retrieval.

    Science.gov (United States)

    Overoye, Acacia L; Storm, Benjamin C

    2018-04-26

    The gestures that occur alongside speech provide listeners with cues that both improve and alter memory for speech. The present research investigated the interplay of gesture and speech by examining the influence of retrieval on memory for gesture. In three experiments, participants watched video clips of an actor speaking a series of statements with or without gesture before being asked to retrieve the speech portions of half of those statements. Participants were then tested on their ability to recall whether the actor had gestured during each statement and, if so, to recall the nature of the gesture that was produced. Results indicated that attempting to retrieve the speech portion of the statements enhanced participants' ability to remember the gesture portion of the statements. This result was only observed, however, for representational gestures when the speech and gesture components were meaningfully related (Experiments 1 & 2). It was not observed for beat gestures or nonsense gestures (Experiments 2 & 3). These results are consistent with the idea that gestures can be coactivated during the retrieval of speech and that such coactivation is due to the integrated representation of speech and gesture in memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. A comparison of sung and spoken phonation onset gestures using high-speed digital imaging.

    Science.gov (United States)

    Freeman, Ena; Woo, Peak; Saxman, John H; Murry, Thomas

    2012-03-01

    Phonation onset is important in the maintenance of healthy vocal production for speech and singing. The purpose of this preliminary study was to examine differences in vocal fold vibratory behavior between sung and spoken phonation onset gestures. Given the greater degree of precision required for the abrupt onset sung gestures, we hypothesize that differences exist in the timing and coordination of the vocal fold adductory gesture with the onset of vocal fold vibration. Staccato and German (a modified glottal plosive, so named for its occurrence in German classical singing) onset gestures were compared with breathy, normal, and hard onset gestures, using high-speed digital imaging. Samples were obtained from two subjects with no history of voice disorders (a female trained singer and a male nonsinger). Simultaneous capture of acoustical data confirmed the distinction among gestures. Image data were compared for glottal area configurations, degree of adductory positioning, number of small-amplitude prephonatory oscillations (PPOs), and timing of onset gesture events, the latter marked by maximum vocal fold abduction, maximum adduction, beginning of PPOs, and beginning of steady-state oscillation. Results reveal closer adductory positioning of the vocal folds for the staccato and German gestures. The data also suggest a direct relationship between the degree of adductory positioning and the number of PPOs. Results for the timing of onset gesture events suggest a relationship between discrete adductory positioning and more evenly spaced PPOs. By contrast, the overlapping of prephonatory adductory positioning with vibration onset revealed more unevenly spaced PPOs. This may support an existing hypothesis that less well-defined boundaries interfere with normal modes of vibration of the vocal fold tissue. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  4. The impact of iconic gestures on foreign language word learning and its neural substrate.

    Science.gov (United States)

    Macedonia, Manuela; Müller, Karsten; Friederici, Angela D

    2011-06-01

    Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.

  5. The impact of autism spectrum disorder symptoms on gesture use in fragile X syndrome and Down syndrome

    Directory of Open Access Journals (Sweden)

    Emily Lorang

    2017-12-01

    Full Text Available Background & aims This study compared gesture rate and purpose in participants with Down syndrome and fragile X syndrome, and the impact of autism spectrum disorder symptoms on each syndrome. Methods Twenty individuals with fragile X syndrome and 20 individuals with Down syndrome between nine and 22 years of age participated in this study. We coded gesture rate and purpose from an autism spectrum disorder evaluation, the Autism Diagnostic Observation Schedule – Second Edition. Results We did not find between-group differences (Down syndrome compared to fragile X syndrome) in gesture rate or purpose. Notably, as autism spectrum disorder symptoms increased, the group with Down syndrome produced a lower rate of gestures, but used gestures for the same purpose. Gesture rate did not change based on autism spectrum disorder symptoms in the participants with fragile X syndrome, but as autism spectrum disorder symptoms increased, the participants with fragile X syndrome produced a larger proportion of gestures to regulate behavior and a smaller proportion for joint attention/social interaction. Conclusions Overall, the amount or purpose of gestures did not differentiate individuals with Down syndrome and fragile X syndrome. However, the presence of autism spectrum disorder symptoms had a significant and unique impact on these genetic disorders. In individuals with Down syndrome, the presence of more autism spectrum disorder symptoms resulted in a reduction in the rate of gesturing, but did not change the purpose. However, in fragile X syndrome, the rate of gestures remained the same, but the purpose of those gestures changed based on autism spectrum disorder symptoms. Implications Autism spectrum disorder symptoms differentially impact gestures in Down syndrome and fragile X syndrome. Individuals with Down syndrome and more autism spectrum disorder symptoms are using gestures less frequently. Therefore, clinicians may need to consider children with

  6. Gesture and Identity in the Teaching and Learning of Italian

    Science.gov (United States)

    Peltier, Ilaria Nardotto; McCafferty, Steven G.

    2010-01-01

    This study investigated the use of mimetic gestures of identity by foreign language teachers of Italian and their students in college classes as a form of meaning-making. All four of the teachers were found to use a variety of Italian gestures as a regular aspect of their teaching and presentation of self. Students and teachers also were found to…

  7. Domestic Dogs Use Contextual Information and Tone of Voice when following a Human Pointing Gesture

    NARCIS (Netherlands)

    Scheider, Linda; Grassmann, Susanne; Kaminski, Juliane; Tomasello, Michael

    2011-01-01

    Domestic dogs are skillful at using the human pointing gesture. In this study we investigated whether dogs take contextual information into account when following pointing gestures, specifically, whether they follow human pointing gestures more readily in the context in which food has been found

  8. Toward an Epistemology of the Hand

    DEFF Research Database (Denmark)

    Brinkmann, Svend; Tanggaard, Lene

    2010-01-01

    ‘epistemology of the eye' has been at work, which has had significant practical implications, not least in educational contexts. One way to characterize John Dewey's pragmatism is to see it as an attempt to replace the epistemology of the eye with an epistemology of the hand. This article develops...

  9. Magic Ring: A Finger-Worn Device for Multiple Appliances Control Using Static Finger Gestures

    Directory of Open Access Journals (Sweden)

    Tongjun Huang

    2012-05-01

    Full Text Available An ultimate goal for Ubiquitous Computing is to enable people to interact with surrounding electrical devices using the habitual body gestures they use when communicating with each other. The feasibility of such an idea is demonstrated through a wearable gestural device named the Magic Ring (MR), an original compact wireless sensing mote in a ring shape that can recognize various finger gestures. A scenario of wireless multiple-appliance control is selected as a case study to evaluate the usability of such a gestural interface. Experiments comparing the MR and a Remote Controller (RC) were performed to evaluate the usability. The results show that, with only 10 minutes of practice, the proposed paradigm of gesture-based control can achieve a performance of completing about six tasks per minute, which is on the same level as the RC-based method.
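
    As a rough illustration of static finger-gesture recognition on a ring-worn sensor, the sketch below classifies an assumed 3-axis accelerometer reading (dominated by gravity) against a few direction templates with a nearest-centroid rule; the gesture set, template values and threshold are invented for illustration, not the Magic Ring's actual method.

```python
import numpy as np

# Toy nearest-centroid recognizer for static finger gestures, assuming the ring
# reports a 3-axis accelerometer reading dominated by gravity (illustrative only).
TEMPLATES = {                     # mean gravity direction per gesture (unit vectors)
    "point_up":    np.array([0.0, 0.0, 1.0]),
    "point_down":  np.array([0.0, 0.0, -1.0]),
    "point_left":  np.array([-1.0, 0.0, 0.0]),
    "point_right": np.array([1.0, 0.0, 0.0]),
}

def classify(accel_xyz, min_cosine=0.8):
    """Return the template whose direction is closest to the measured one."""
    v = np.asarray(accel_xyz, dtype=float)
    v /= np.linalg.norm(v)
    best, score = max(((g, float(t @ v)) for g, t in TEMPLATES.items()),
                      key=lambda p: p[1])
    return best if score >= min_cosine else "unknown"

print(classify([0.1, -0.05, 0.97]))   # -> "point_up"
```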

  10. Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?

    Science.gov (United States)

    Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel

    2018-01-01

    According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface

    Directory of Open Access Journals (Sweden)

    Grazia Cicirelli

    2015-03-01

    Full Text Available In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
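
    A compact sketch of the classification stage described above: per-joint rotation quaternions flattened into a feature vector and fed to a small neural network. Scikit-learn's MLPClassifier stands in for the paper's NN classifiers, and the skeleton data here are random placeholders rather than Kinect/OpenNI output.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

N_JOINTS = 15                      # OpenNI-style skeleton; 4 quaternion values per joint

def skeleton_to_features(quaternions):
    """Flatten per-joint rotation quaternions (N_JOINTS x 4) into one feature vector."""
    q = np.asarray(quaternions, dtype=float).reshape(N_JOINTS, 4)
    q /= np.linalg.norm(q, axis=1, keepdims=True)     # normalize to unit quaternions
    return q.ravel()

# Placeholder training set: random "skeleton frames" for two gesture classes.
rng = np.random.default_rng(0)
X = np.array([skeleton_to_features(rng.normal(size=(N_JOINTS, 4))) for _ in range(40)])
y = np.array([0, 1] * 20)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```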

  12. Gesture language use in natural UI: pen-based sketching in conceptual design

    Science.gov (United States)

    Ma, Cuixia; Dai, Guozhong

    2003-04-01

    Natural user interfaces are one of the important next-generation interaction styles. Computers are no longer tools for a few specialists in particular areas but for most people, and ubiquitous computing makes the world seem magical and more comfortable. In the design domain, current systems, which require detailed information, cannot conveniently support the conceptual design of the early phase. Pen and paper are the natural and simple tools we use in daily life, especially in design. Gestures are a useful and natural mode of pen-based interaction. In a natural UI, gestures can be introduced and used in a manner similar to existing interaction resources. However, gestures are usually defined beforehand, without regard to users' intentions, and are recognized to represent something only in specific applications, without being transferable to others. We propose a gesture description language (GDL) that aims to make useful gestures conveniently available to applications. It can be used as an independent control resource, like menus or icons, in applications. We therefore present the idea from two perspectives: the application-dependent point of view and the application-independent point of view.
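
    To illustrate the general idea of describing gestures declaratively so that applications can reuse them, the toy example below maps quantized stroke-direction sequences to commands; the entry format and matcher are invented for illustration and do not reflect the paper's actual GDL syntax.

```python
# Toy declarative gesture description: each entry maps a stroke-direction
# sequence to a command, so applications can reuse gesture definitions the way
# they reuse menu items. The syntax is invented, not the paper's actual GDL.
GESTURES = {
    ("right", "down"):       "create_rectangle",
    ("down", "right", "up"): "create_u_shape",
    ("left",):               "delete",
}

def directions(points):
    """Quantize a pen stroke (list of (x, y) points, y down) into coarse directions."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        d = ("right" if dx > 0 else "left") if abs(dx) >= abs(dy) else \
            ("down" if dy > 0 else "up")
        if not dirs or dirs[-1] != d:          # collapse repeated directions
            dirs.append(d)
    return tuple(dirs)

def recognize(points):
    return GESTURES.get(directions(points), "unrecognized")

print(recognize([(0, 0), (10, 1), (11, 12)]))   # right then down -> create_rectangle
```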

  13. Hospitable Gestures in the University Lecture: Analysing Derrida's Pedagogy

    Science.gov (United States)

    Ruitenberg, Claudia

    2014-01-01

    Based on archival research, this article analyses the pedagogical gestures in Derrida's (largely unpublished) lectures on hospitality (1995/96), with particular attention to the enactment of hospitality in these gestures. The motivation for this analysis is twofold. First, since the large-group university lecture has been widely critiqued as…

  14. Role of maternal gesture use in speech use by children with fragile X syndrome.

    Science.gov (United States)

    Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K

    2014-05-01

    The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.

  15. Gesture recognition for an exergame prototype

    NARCIS (Netherlands)

    Gacem, Brahim; Vergouw, Robert; Verbiest, Harm; Cicek, Emrullah; Kröse, Ben; van Oosterhout, Tim; Bakkes, S.C.J.

    2011-01-01

    We will demonstrate a prototype exergame aimed at the serious domain of elderly fitness. The exergame incorporates straightforward means of gesture recognition and utilises a Kinect camera to obtain 2.5D sensory data of the human user.
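
    As a hedged illustration of what "straightforward" gesture recognition over Kinect skeleton data might look like, the snippet below checks a single rule on joint positions; the joint names, coordinate convention, and threshold are assumptions, not details from the prototype.

```python
# Minimal rule-based gesture check over 2.5D skeleton data (hypothetical names).
from typing import Dict, Tuple

Joint = Tuple[float, float, float]  # (x, y, depth) in image-style coordinates


def is_hand_raised(skeleton: Dict[str, Joint], margin: float = 0.10) -> bool:
    """Report a 'hand raised' gesture when the right hand is above the head."""
    hand_y = skeleton["hand_right"][1]
    head_y = skeleton["head"][1]
    # Assuming image coordinates grow downward, 'above' means a smaller y value.
    return hand_y < head_y - margin
```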

  16. Gestural apraxia.

    Science.gov (United States)

    Etcharry-Bouyx, F; Le Gall, D; Jarry, C; Osiurak, F

    Gestural apraxia was first described in 1905 by Hugo Karl Liepmann. While his description is still used, the terminology remains confusing. Model-based cognitive approaches propose thinking of the condition in terms of production and conceptual knowledge. The underlying cognitive processes are still being debated, as are the optimal ways to assess them. Several neuroimaging studies have revealed the involvement of a left-lateralized frontoparietal network, with preferential activation of the superior parietal lobe, intraparietal sulcus and inferior parietal cortex. Apraxia after stroke is prevalent, and its incidence is sufficient to warrant rehabilitation.

  17. Gesture Recognition for Educational Games: Magic Touch Math

    Science.gov (United States)

    Kye, Neo Wen; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Children nowadays have problems learning and understanding basic mathematical operations because they are not interested in studying or learning mathematics. This project proposes an educational game called Magic Touch Math that focuses on basic mathematical operations, targeted at children between three and five years old, and uses gesture recognition to interact with the game. Magic Touch Math was developed in accordance with the Game Development Life Cycle (GDLC) methodology. The prototype developed has helped children to learn basic mathematical operations via intuitive gestures. It is hoped that the application can get children motivated and interested in mathematics.

  18. Working memory for meaningless manual gestures.

    Science.gov (United States)

    Rudner, Mary

    2015-03-01

    Effects on working memory performance relating to item similarity have been linked to prior categorisation of representations in long-term memory. However, there is evidence from gesture processing that this link may not be obligatory. The present study investigated whether working memory for incidentally generated meaningless manual gestures is influenced by formational similarity and whether this effect is modulated by working-memory load. Results showed that formational similarity did lower performance, demonstrating that similarity effects are not dependent on prior categorisation. However, this effect was only found when working-memory load was low, supporting a flexible resource-allocation model according to which it is the quality rather than the quantity of working-memory representations that determines performance. This interpretation is in line with proposals suggesting language-modality-specific allocation of resources in working memory.

  19. Imitation and matching of meaningless gestures: distinct involvement from motor and visual imagery.

    Science.gov (United States)

    Lesourd, Mathieu; Navarro, Jordan; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François

    2017-05-01

    The aim of the present study was to understand the cognitive processes underlying the imitation and matching of meaningless gestures. Neuropsychological evidence obtained in brain-damaged patients has shown that distinct cognitive processes support imitation and matching of meaningless gestures: left-brain-damaged (LBD) patients failed to imitate, while right-brain-damaged (RBD) patients failed to match meaningless gestures. Moreover, other studies with brain-damaged patients showed that LBD patients were impaired in motor imagery while RBD patients were impaired in visual imagery. We therefore hypothesized that imitation of meaningless gestures might rely on motor imagery, whereas matching of meaningless gestures might be based on visual imagery. In a first experiment, using a correlational design, we demonstrated that posture imitation relies on motor imagery but not on visual imagery (Experiment 1a) and that posture matching relies on visual imagery but not on motor imagery (Experiment 1b). In a second experiment, by directly manipulating the participants' body posture, we demonstrated that this manipulation affects performance only in the imitation task, not in the matching task. In conclusion, the present study provides direct evidence that imitating and comparing postures depend on motor imagery and visual imagery, respectively. Our results are discussed in the light of recent findings about the mechanisms underlying meaningful and meaningless gestures.

  20. The Effect of Iconic and Beat Gestures on Memory Recall in Greek's First and Second Language

    OpenAIRE

    Eleni Ioanna Levantinou

    2016-01-01

    Gestures play a major role in comprehension and memory recall because they aid the efficient channelling of meaning and support listeners' comprehension and memory. In the present study, the assistance provided by two kinds of gestures (iconic and beat gestures) is tested with regard to memory and recall. The hypothesis investigated here is whether or not iconic and beat gestures provide assistance in memory and recall in Greek and in Greek speakers' second language. Two gr...