WorldWideScience

Sample records for hand motion recognition

  1. Hand Gesture Recognition with Leap Motion

    OpenAIRE

    Du, Youchen; Liu, Shenglan; Feng, Lin; Chen, Menghui; Wu, Jie

    2017-01-01

    The recent introduction of depth cameras like the Leap Motion Controller allows researchers to exploit depth information to recognize hand gestures more robustly. This paper proposes a novel hand gesture recognition system based on the Leap Motion Controller. A series of features is extracted from the Leap Motion tracking data; these features, along with HOG features extracted from sensor images, are fed into a multi-class SVM classifier to recognize the performed gesture, dimension reduction and feature weight...
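
    A minimal sketch (not the authors' code) of the kind of feature-fusion pipeline the abstract describes: Leap-Motion-derived feature vectors are concatenated with HOG descriptors and fed to a multi-class SVM. The feature matrices and labels are assumed to be precomputed.

    ```python
    # Sketch only: fuse Leap Motion tracking features with HOG image features
    # and classify gestures with a multi-class SVM (one-vs-rest).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def fuse_features(leap_feats, hog_feats):
        """Concatenate per-sample Leap tracking features and image HOG features."""
        return np.hstack([leap_feats, hog_feats])

    def train_gesture_svm(X_leap, X_hog, y):
        # X_leap: (n_samples, d1), X_hog: (n_samples, d2), y: gesture labels.
        X = fuse_features(X_leap, X_hog)
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", decision_function_shape="ovr"))
        clf.fit(X, y)
        return clf
    ```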

  2. Hand based visual intent recognition algorithm for wheelchair motion

    CSIR Research Space (South Africa)

    Luhandjula, T

    2010-05-01

    Full Text Available This paper describes an algorithm for a visual human-machine interface that infers a person’s intention from the motion of the hand. Work in progress shows a proof of concept tested on static images. The context for which this solution is intended...

  3. Human motion sensing and recognition a fuzzy qualitative approach

    CERN Document Server

    Liu, Honghai; Ji, Xiaofei; Chan, Chee Seng; Khoury, Mehdi

    2017-01-01

    This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the ...

  4. Motion Primitives for Action Recognition

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2007-01-01

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporally isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.
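
    For illustration, a plain (non-probabilistic) edit distance over primitive symbol strings can already classify actions in the way the abstract outlines; it is a simplified stand-in for the probabilistic Edit Distance the paper applies.

    ```python
    # Illustrative only: classify an action from its recognized primitive string
    # by comparing it to template strings with the Levenshtein edit distance.
    def edit_distance(a, b):
        """Standard edit distance between two sequences of primitive symbols."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,      # deletion
                              d[i][j - 1] + 1,      # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n]

    def classify(observed, templates):
        """templates: dict mapping action name -> primitive string."""
        return min(templates, key=lambda name: edit_distance(observed, templates[name]))
    ```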

  5. Hand gesture recognition by analysis of codons

    Science.gov (United States)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could substitute for the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this thesis gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between maxima, minima and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (letters J and Z are ignored as they are represented using motion) [2]. The query image given as an input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and are assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.

  6. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for those activities where the movements of the robotic arm were not previously scheduled, making training the robot easier than using a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  7. Action Recognition using Motion Primitives

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Fihl, Preben; Holte, Michael Boelstoft

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporally isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.

  8. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, video gaming, etc. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices to detect gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one of the approaches employed in contactless devices. Hand gesture recognition utilizing ultrasonic waves is the focus of this thesis work. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. This method uses a linear frequency modulated ultrasonic signal. The ultrasonic signal is designed to meet the project requirements, such as the update rate and the range of detection, and to overcome hardware limitations such as the limited output power and the transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: range estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods: the channel impulse response (CIR) and the cross correlation (CC) of the reflected ultrasonic signal from the gesturing hand. A customized simple hardware setup was used to classify a set of hand gestures with high accuracy. The detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
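
    A rough sketch, under assumed parameter values (sampling rate, chirp band, duration), of how range and received signal strength can be read off the cross-correlation between the transmitted LFM chirp and the received echo; it is not the thesis implementation.

    ```python
    # Sketch: estimate hand range and RSS from the cross-correlation of the
    # received signal with the transmitted linear-frequency-modulated chirp.
    import numpy as np
    from scipy.signal import chirp, correlate

    FS = 192_000          # assumed sampling rate, Hz
    C = 343.0             # speed of sound in air, m/s
    T = 0.005             # assumed chirp duration, s

    t = np.arange(0, T, 1 / FS)
    tx = chirp(t, f0=20_000, f1=40_000, t1=T)   # assumed ultrasonic band

    def range_and_rss(rx):
        """rx: received samples aligned to the start of transmission."""
        xc = correlate(rx, tx, mode="full")[len(tx) - 1:]   # keep non-negative lags
        lag = int(np.argmax(np.abs(xc)))                    # strongest echo delay
        distance = (lag / FS) * C / 2.0                     # two-way travel time
        rss = float(np.abs(xc[lag]))                        # correlation peak as RSS proxy
        return distance, rss
    ```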

  9. Towards NIRS-based hand movement recognition.

    Science.gov (United States)

    Paleari, Marco; Luciani, Riccardo; Ariano, Paolo

    2017-07-01

    This work reports preliminary results on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data-gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and is at the base of many gaming, social, and medical applications. Although, in recent years, the use of muscle information extracted by sEMG has spread from medical applications into the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to get another point of view on the muscle phenomena and proved that, within a specific selection of movements, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose here three different multimodal movement recognition approaches and compare their performances.

  10. An Analysis of Intrinsic and Extrinsic Hand Muscle EMG for Improved Pattern Recognition Control.

    Science.gov (United States)

    Adewuyi, Adenike A; Hargrove, Levi J; Kuiken, Todd A

    2016-04-01

    Pattern recognition control combined with surface electromyography (EMG) from the extrinsic hand muscles has shown great promise for control of multiple prosthetic functions for transradial amputees. There is, however, a need to adapt this control method when implemented for partial-hand amputees, who possess both a functional wrist and information-rich residual intrinsic hand muscles. We demonstrate that combining EMG data from both intrinsic and extrinsic hand muscles to classify hand grasps and finger motions allows up to 19 classes of hand grasps and individual finger motions to be decoded, with an accuracy of 96% for non-amputees and 85% for partial-hand amputees. We evaluated real-time pattern recognition control of three hand motions in seven different wrist positions. We found that a system trained with both intrinsic and extrinsic muscle EMG data, collected while statically and dynamically varying wrist position, increased completion rates from 73% to 96% for partial-hand amputees and from 88% to 100% for non-amputees when compared to a system trained with only extrinsic muscle EMG data collected in a neutral wrist position. Our study shows that incorporating intrinsic muscle EMG data and wrist motion can significantly improve the robustness of pattern recognition control for application to partial-hand prosthetic control.
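
    A minimal sketch of the channel-combination step the study reports, with an LDA classifier standing in as a common (assumed) choice for myoelectric pattern recognition; per-window feature extraction is assumed to have been done already.

    ```python
    # Sketch only: concatenate per-window features from intrinsic and extrinsic
    # EMG channels and train a generic classifier on the combined feature set.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def combine_channels(feat_extrinsic, feat_intrinsic):
        """Each input: (n_windows, n_features) computed per muscle group."""
        return np.hstack([feat_extrinsic, feat_intrinsic])

    def train(feat_extrinsic, feat_intrinsic, labels):
        X = combine_channels(feat_extrinsic, feat_intrinsic)
        clf = LinearDiscriminantAnalysis()   # assumed classifier choice
        clf.fit(X, labels)
        return clf
    ```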

  11. Optimizing pattern recognition-based control for partial-hand prosthesis application.

    Science.gov (United States)

    Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J

    2014-01-01

    Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself, and that the window length and the number of hand-grasps available to the classifier significantly affect classification error.

  12. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    Full Text Available The natural user interface (NUI) is used for natural motion interfaces that do not require devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can also control the computer using the user's hand gestures without a mouse and keyboard. In the proposed framework, the user's head direction and hand gesture are mapped into mouse and keyboard events, respectively.

  13. Gait Recognition Using Wearable Motion Recording Sensors

    Directory of Open Access Journals (Sweden)

    Davrondzhon Gafurov

    2009-01-01

    Full Text Available This paper presents an alternative approach, where gait is collected by sensors attached to the person's body. Such wearable sensors record motion (e.g. acceleration) of the body parts during walking. The recorded motion signals are then investigated for person recognition purposes. We analyzed acceleration signals from the foot, hip, pocket and arm. Applying various methods, the best EER obtained for foot-, pocket-, arm- and hip-based user authentication were 5%, 7%, 10% and 13%, respectively. Furthermore, we present the results of our analysis on the security assessment of gait. Studying gait-based user authentication (in the case of hip motion) under three attack scenarios, we revealed that minimal-effort mimicking does not help to improve the acceptance chances of impostors. However, impostors who know their closest person in the database or the genders of the users can be a threat to gait-based authentication. We also provide some new insights toward the uniqueness of gait in the case of foot motion. In particular, we revealed the following: a sideways motion of the foot provides the most discrimination, compared to the up-down or forward-backward directions; and different segments of the gait cycle provide different levels of discrimination.
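
    For reference, an equal error rate (EER) like the ones quoted above can be computed from genuine and impostor similarity scores roughly as follows (illustrative only, not the paper's evaluation code):

    ```python
    # Illustrative EER computation: sweep thresholds and find the point where the
    # false accept rate (FAR) and false reject rate (FRR) are closest.
    import numpy as np

    def equal_error_rate(genuine, impostor):
        genuine = np.asarray(genuine)
        impostor = np.asarray(impostor)
        best_gap, eer = np.inf, 0.0
        for thr in np.unique(np.concatenate([genuine, impostor])):
            far = np.mean(impostor >= thr)   # impostors wrongly accepted
            frr = np.mean(genuine < thr)     # genuine users wrongly rejected
            gap = abs(far - frr)
            if gap < best_gap:
                best_gap, eer = gap, (far + frr) / 2.0
        return eer
    ```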

  14. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-01-01

    estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods; channel impulse response (CIR) and cross correlation (CC) of the reflected ultrasonic signal from the gesturing hand. A customized

  15. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. At first, the face is removed from the image using a Haar classifier and subsequently, the regions corresponding to the gesturing hands are isolated by a region labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications, such as sign language recognition and human-robot interaction.

  16. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  17. Hand-Geometry Recognition Based on Contour Parameters

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Bazen, A.M.; Booij, W.D.T.; Hendrikse, A.J.; Jain, A.K.; Ratha, N.K.

    This paper demonstrates the feasibility of a new method of hand-geometry recognition based on parameters derived from the contour of the hand. The contour is completely determined by the black-and-white image of the hand and can be derived from it by means of simple image-processing techniques. It

  18. Attention, biological motion, and action recognition.

    Science.gov (United States)

    Thompson, James; Parasuraman, Raja

    2012-01-02

    Interacting with others in the environment requires that we perceive and recognize their movements and actions. Neuroimaging and neuropsychological studies have indicated that a number of brain regions, particularly the superior temporal sulcus, are involved in a number of processes essential for action recognition, including the processing of biological motion and processing the intentions of actions. We review the behavioral and neuroimaging evidence suggesting that while some aspects of action recognition might be rapid and effective, they are not necessarily automatic. Attention is particularly important when visual information about actions is degraded or ambiguous, or if competing information is present. We present evidence indicating that neural responses associated with the processing of biological motion are strongly modulated by attention. In addition, behavioral and neuroimaging evidence shows that drawing inferences from the actions of others is attentionally demanding. The role of attention in action observation has implications for everyday social interactions and workplace applications that depend on observing, understanding and interpreting actions. Published by Elsevier Inc.

  19. Parametric Primitives for Hand Gesture Recognition

    DEFF Research Database (Denmark)

    Baby, Sanmohan; Krüger, Volker

    2009-01-01

    Imitation learning is considered to be an effective way of teaching humanoid robots and action recognition is the key step to imitation learning. In this paper  an online algorithm to recognize parametric actions with object context is presented. Objects are key instruments in understanding...

  20. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures as a replacement for traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the methods frequently used in the literature for hand posture recognition is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures, as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust, with recognition rates close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against non-perfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide 3D information on the imaged scene in real time and at a high frame rate. This sensor has been described and evaluated for its capability to capture a moving hand in real time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand

  1. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those samples of gestures which do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.
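
    One ingredient mentioned above, quantizing motion vectors into 8 or 16 predetermined directions for the trajectory classifier, could look roughly like this (an assumed implementation, not the authors' code):

    ```python
    # Sketch: map a motion vector to one of n_bins equally spaced direction bins.
    import numpy as np

    def quantize_direction(dx, dy, n_bins=8):
        """Return the index of the direction bin (bin 0 centred on the +x axis)."""
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        bin_width = 2 * np.pi / n_bins
        return int(((angle + bin_width / 2) // bin_width) % n_bins)

    # Example: a mostly upward motion vector falls into the 90-degree bin.
    print(quantize_direction(0.1, 1.0, n_bins=8))   # -> 2
    ```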

  2. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those samples of gestures which do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  3. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion...

  4. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

    Full Text Available This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller can provide a ‘connect-and-play’ service for the users to control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robots. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into corresponding control commands according to their characteristic curves. A vibration motor is used to provide vibrotactile feedback to the user when an improper operation is performed. The performances of the proposed hand motion-based interface and the traditional keyboard and mouse interface have been compared in robot navigation experiments. The experimental results of home robot navigation show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller. The precision of the handheld controller is 15.4% higher than that of the PC-based controller, and the execution time is 24.7% less. This means that the proposed hand motion-based interface is more efficient and flexible.

  5. Effects of Age and Gender on Hand Motion Tasks

    Directory of Open Access Journals (Sweden)

    Wing Lok Au

    2015-01-01

    Full Text Available Objective. Wearable and wireless motion sensor devices have facilitated the automated computation of speed, amplitude, and rhythm of hand motion tasks. The aim of this study is to determine if there are any biological influences on these kinematic parameters. Methods. 80 healthy subjects performed hand motion tasks twice for each hand, with movements measured using a wireless motion sensor device (Kinesia, Cleveland Medical Devices Inc., Cleveland, OH). Multivariate analyses were performed with age, gender, and height added into the model. Results. Older subjects performed worse in finger tapping (FT) speed (r=0.593, p<0.001), hand-grasp (HG) speed (r=0.517, p<0.001), and pronation-supination (PS) speed (r=0.485, p<0.001). Men performed better in FT rhythm (p<0.02), HG speed (p<0.02), HG amplitude (p<0.02), and HG rhythm (p<0.05). Taller subjects performed better in the speed and amplitude components of FT (p<0.02) and HG tasks (p<0.02). After multivariate analyses, only age and gender emerged as significant independent factors influencing the speed but not the amplitude and rhythm components of hand motion tasks. Gender exerted an independent influence only on HG speed, with better performance in men (p<0.05). Conclusions. Age, gender, and height are not independent factors influencing the amplitude and rhythm components of hand motion tasks. The speed component is affected by age and gender differences.

  6. Electromyographic Grasp Recognition for a Five Fingered Robotic Hand

    Directory of Open Access Journals (Sweden)

    Nayan M. Kakoty

    2012-09-01

    Full Text Available This paper presents classification of grasp types based on surface electromyographic signals. Classification is performed by a radial basis function kernel support vector machine using the sum of wavelet decomposition coefficients of the EMG signals. In a study involving six subjects, we achieved an average recognition rate of 86%. The electromyographic grasp recognition together with an 8-bit microcontroller has been employed to control a five-fingered robotic hand to emulate six grasp types used during 70% of daily living activities.
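
    A minimal sketch of the described feature/classifier pairing; the wavelet family, decomposition level, and use of pywt and scikit-learn are assumptions, not details taken from the paper.

    ```python
    # Sketch: sum of wavelet decomposition coefficients per level as EMG features,
    # classified with an RBF-kernel support vector machine.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_sum_features(emg_window, wavelet="db4", level=4):
        """Sum the coefficients at each decomposition level of one EMG window."""
        coeffs = pywt.wavedec(emg_window, wavelet, level=level)
        return np.array([np.sum(c) for c in coeffs])

    def train_grasp_classifier(windows, labels):
        X = np.vstack([wavelet_sum_features(w) for w in windows])
        clf = SVC(kernel="rbf")
        clf.fit(X, labels)
        return clf
    ```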

  7. Human motion retrieval from hand-drawn sketch.

    Science.gov (United States)

    Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee

    2012-05-01

    The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from a large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users' expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU/processing-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries if the users are familiar with it. Finally, an application of generating a 3D animation from a hand-drawn comic strip is demonstrated.

  8. Hand motion modeling for psychology analysis in job interview using optical flow-history motion image: OF-HMI

    Science.gov (United States)

    Khalifa, Intissar; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    To survive the competition, companies always think about having the best employees. The selection depends on the answers to the interviewer's questions and on the behavior of the candidate during the interview session. The study of this behavior is always based on a psychological analysis of the movements accompanying the answers and discussions. Few techniques have been proposed to date to automatically analyze a candidate's non-verbal behavior. This paper is part of a work-psychology recognition system; it concentrates on spontaneous hand gestures, which are very significant in interviews according to psychologists. We propose a motion history representation of the hand based on a hybrid approach that merges optical flow and history motion images. The optical flow technique is used first to detect hand motions in each frame of a video sequence. Second, we use the history motion images (HMI) to accumulate the output of the optical flow in order to obtain a good representation of the hand's local movement in a global temporal template.

  9. Threats of Password Pattern Leakage Using Smartwatch Motion Recognition Sensors

    Directory of Open Access Journals (Sweden)

    Jihun Kim

    2017-06-01

    Full Text Available Thanks to the development of Internet of Things (IoT) technologies, wearable markets have been growing rapidly. Smartwatches can be said to be the most representative product in wearable markets, and involve various hardware technologies in order to overcome the limitations of small hardware. Motion recognition sensors are a representative example of those hardware technologies. However, smartwatches and motion recognition sensors that can be worn by users may pose security threats of password pattern leakage. In the present paper, passwords are inferred through experiments that obtain the password patterns inputted by users from motion recognition sensor data, and the results and their accuracy are verified.

  10. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric approach fuses hand-geometry (the side view of the hand and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  11. Hand biometric recognition based on fused hand geometry and vascular patterns.

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric approach fuses hand-geometry (the side view of the hand and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.

  12. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    NARCIS (Netherlands)

    Shoaib, M.; Bosch, S.; Durmaz, O.; Scholten, Johan; Havinga, Paul J.M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such

  13. Hand motion segmentation against skin colour background in breast awareness applications.

    Science.gov (United States)

    Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick

    2004-01-01

    Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. A hand is an essential tool used in breast self-examination, which needs to be detected and analysed during the process of breast palpation. However, the background of a woman's moving hand is her breast that has the same or similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by the lighting or brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without utilising any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify a breast but not a hand. First, an input image is transformed to YCbCr colour space, which is less sensitive to the lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Secondly, motion cues are used to segment the hand motion against the detected baseline. Desired segmentation results have been achieved and the robustness of this algorithm is demonstrated in this paper.
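
    A rough sketch of a 2D Gaussian skin-colour model applied in the CbCr plane after conversion to YCbCr, as the abstract describes; the mean and covariance values are placeholders that would normally be fitted to training skin samples, and this is not the authors' implementation.

    ```python
    # Sketch: score each pixel with a 2D Gaussian skin model in the (Cb, Cr) plane.
    import cv2
    import numpy as np

    SKIN_MEAN = np.array([120.0, 150.0])                     # assumed (Cb, Cr) mean
    SKIN_COV_INV = np.linalg.inv(np.array([[80.0, 0.0],
                                           [0.0, 60.0]]))    # assumed covariance

    def skin_likelihood_mask(bgr_image, thresh=6.0):
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
        cb, cr = ycrcb[..., 2], ycrcb[..., 1]                # OpenCV channel order: Y, Cr, Cb
        d = np.stack([cb - SKIN_MEAN[0], cr - SKIN_MEAN[1]], axis=-1)
        mahal = np.einsum("...i,ij,...j->...", d, SKIN_COV_INV, d)  # Mahalanobis distance
        return (mahal < thresh).astype(np.uint8) * 255       # white = skin-coloured pixel
    ```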

  14. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention to perform six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250 ms, including data acquisition, transmission and processing. One SCI subject also participated in training sessions during his second and third visits. Both the control accuracy and efficiency tended to improve. These results show great potential for applying the advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.

  15. Action Recognition by Joint Spatial-Temporal Motion Feature

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2013-01-01

    Full Text Available This paper introduces a method for human action recognition based on optical flow motion feature extraction. Automatic spatial and temporal alignments are combined in order to encourage the temporal consistency of each action by an enhanced dynamic time warping (DTW) algorithm. At the same time, a fast method based on a coarse-to-fine DTW constraint is introduced to improve computational performance without reducing accuracy. The main contributions of this study include (1) a joint spatial-temporal multiresolution optical flow computation method which can encode more informative motion information than recently proposed methods, (2) an enhanced DTW method to improve the temporal consistency of motion in action recognition, and (3) a coarse-to-fine DTW constraint on motion feature pyramids to speed up recognition. Using this method, high recognition accuracy is achieved on different action databases such as the Weizmann database and the KTH database.
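
    For reference, the basic DTW recurrence that the enhanced, coarse-to-fine-constrained variant described above builds on (a plain implementation, not the paper's):

    ```python
    # Plain dynamic time warping between two sequences of per-frame feature vectors.
    import numpy as np

    def dtw(seq_a, seq_b):
        """Return the accumulated alignment cost between seq_a and seq_b."""
        m, n = len(seq_a), len(seq_b)
        D = np.full((m + 1, n + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[m, n]
    ```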

  16. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, in the facet of hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak in the likelihood. The proposed hand tracking method, namely the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably, in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. In the facet of gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the settings of the challenging hand-signed digits datasets and the American sign language dataset, which corroborate the performance of the novel techniques.

  17. User-Independent Motion State Recognition Using Smartphone Sensors.

    Science.gov (United States)

    Gu, Fuqiang; Kealy, Allison; Khoshelham, Kourosh; Shang, Jianga

    2015-12-04

    The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications like indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been a growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states especially when conducting user-independent classification. In this paper, we also make use of the newly emerging barometer built in modern smartphones, and propose a novel feature called pressure derivative from the barometer readings for user motion state recognition, which is proven to be effective for distinguishing vertical motion states and does not depend on specific users' data. Seven types of motion states are defined and six commonly-used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people's motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human's motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.
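
    A sketch of the pressure-derivative idea described above, with assumed smoothing and threshold constants: a sustained non-zero derivative of barometric pressure suggests vertical motion (stairs, elevator) that acceleration features alone struggle to separate.

    ```python
    # Sketch: derive a "pressure derivative" feature from barometer readings and
    # flag windows whose median derivative indicates vertical motion.
    import numpy as np

    def pressure_derivative(pressure_hpa, timestamps_s, smooth_n=25):
        """Finite-difference derivative of smoothed barometer readings, in hPa/s."""
        kernel = np.ones(smooth_n) / smooth_n
        smoothed = np.convolve(pressure_hpa, kernel, mode="same")
        return np.gradient(smoothed, timestamps_s)

    def looks_vertical(pressure_hpa, timestamps_s, thresh_hpa_per_s=0.01):
        dpdt = pressure_derivative(pressure_hpa, timestamps_s)
        return np.abs(np.median(dpdt)) > thresh_hpa_per_s
    ```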

  18. User-Independent Motion State Recognition Using Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Fuqiang Gu

    2015-12-01

    Full Text Available The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications like indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been a growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states especially when conducting user-independent classification. In this paper, we also make use of the newly emerging barometer built in modern smartphones, and propose a novel feature called pressure derivative from the barometer readings for user motion state recognition, which is proven to be effective for distinguishing vertical motion states and does not depend on specific users’ data. Seven types of motion states are defined and six commonly-used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people’s motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human’s motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.

  19. Arm Motion Recognition and Exercise Coaching System for Remote Interaction

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2016-01-01

    Full Text Available Arm motion recognition and its related applications have become a promising human-computer interaction modality due to the rapid integration of numerical sensors in modern mobile-phones. We implement a mobile-phone-based arm motion recognition and exercise coaching system that can help people carrying mobile-phones to do body exercises anywhere at any time, especially persons who have very limited spare time and are constantly traveling across cities. We first design an improved k-means algorithm to cluster the collected 3-axis acceleration and gyroscope data of a person's actions into basic motions. A learning method based on a Hidden Markov Model is then designed to classify and recognize continuous arm motions of both learners and coaches, which also measures the action similarities between the persons. We implement the system on a MIUI 2S mobile-phone and evaluate the system performance and its recognition accuracy.

  20. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second one detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers and the results show that our system is robust and competitive.

  1. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). The common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to keep up with the latest technology. Hand gestures have become one of the most important and attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for hand template matching. Then every hand gesture is translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
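
    An illustrative average-background subtraction stage of the kind the abstract describes as the first step before the 1$ template matcher is applied; the learning rate and threshold values are assumptions.

    ```python
    # Sketch: running-average background model; the foreground mask (e.g., the
    # moving hand) is whatever differs from the accumulated background.
    import cv2
    import numpy as np

    class AverageBackground:
        def __init__(self, alpha=0.02, diff_thresh=25):
            self.alpha = alpha              # background learning rate (assumed)
            self.diff_thresh = diff_thresh  # foreground threshold (assumed)
            self.background = None

        def apply(self, frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if self.background is None:
                self.background = gray.copy()
            cv2.accumulateWeighted(gray, self.background, self.alpha)  # running average
            diff = cv2.absdiff(gray, self.background)
            _, mask = cv2.threshold(diff.astype(np.uint8), self.diff_thresh, 255,
                                    cv2.THRESH_BINARY)
            return mask   # white pixels mark the moving foreground
    ```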

  2. A natural approach to convey numerical digits using hand activity recognition based on hand shape features

    Science.gov (United States)

    Chidananda, H.; Reddy, T. Hanumantha

    2017-06-01

    This paper presents a natural representation of numerical digits using hand activity analysis based on the number of fingers outstretched for each numerical digit in a sequence extracted from a video. The analysis is based on determining a set of six features from a hand image. The most important features used from each frame in a video are the first fingertip from the top, the palm-line, the palm-center, and the valley points between the fingers that exist above the palm-line. Using this work a user can convey any number of numerical digits using the right, left or both hands naturally in a video. Each numerical digit ranges from 0 to 9. The hands (right/left/both) used to convey digits can be recognized accurately using the valley points, and with this recognition it can be analyzed whether the user is in practice a right- or left-handed person. In this work, first the hand(s) and face parts are detected by using the YCbCr color space and the face part is removed by using an ellipse-based method. Then, the hand(s) are analyzed to recognize the activity that represents a series of numerical digits in a video. This work uses a pixel continuity algorithm based on a 2D coordinate geometry system and does not make use of calculus, contours, convex hulls or datasets.

  3. Hand gesture recognition in confined spaces with partial observability and occultation constraints

    Science.gov (United States)

    Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.

  4. Action Recognition in Semi-synthetic Images using Motion Primitives

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    This technical report describes an action recognition approach based on motion primitives. A few characteristic time instances are found in a sequence containing an action and the action is classified from these instances. The characteristic instances are defined solely on the human motion, hence motion primitives. The motion primitives are extracted by double difference images and represented by four features. In each frame the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem since a video sequence will be converted into a string containing a sequence of symbols, each representing a primitive. After pruning the string a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The method is evaluated on five one-arm gestures. A test is performed with semi-synthetic input data...

  5. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    He-Yuan Lin

    2008-03-01

    Full Text Available A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and the combination of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits the flexibility in processing both textures and edges which need to be separately accomplished by line average and edge-based line average before. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video and also the motion with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm with higher content-adaptability and less memory cost than the state-of-the-art 4-field motion detection algorithms can be seen from the subjective and objective experimental results of the CIF and PAL video sequences.
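
    For context, a toy edge-based line average (ELA), one of the building blocks the abstract contrasts with its edge-pattern recognition unit; the three-candidate direction window and the averaging rule are assumptions.

    ```python
    # Sketch: interpolate one missing interlaced scan line from its upper and lower
    # neighbours, choosing the direction along which the two lines match best.
    import numpy as np

    def ela_interpolate(line_above, line_below):
        """line_above, line_below: 1-D arrays of pixel values of equal length."""
        w = len(line_above)
        out = np.empty(w, dtype=np.float64)
        for x in range(w):
            best_cost, best_val = None, None
            for d in (-1, 0, 1):                       # candidate edge directions
                xa, xb = x + d, x - d
                if 0 <= xa < w and 0 <= xb < w:
                    cost = abs(float(line_above[xa]) - float(line_below[xb]))
                    if best_cost is None or cost < best_cost:
                        best_cost = cost
                        best_val = (float(line_above[xa]) + float(line_below[xb])) / 2.0
            out[x] = best_val
        return out
    ```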

  6. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Li Hsin-Te

    2008-01-01

    Full Text Available Abstract A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and the combination of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits the flexibility in processing both textures and edges which need to be separately accomplished by line average and edge-based line average before. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video and also the motion with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm with higher content-adaptability and less memory cost than the state-of-the-art 4-field motion detection algorithms can be seen from the subjective and objective experimental results of the CIF and PAL video sequences.

  7. Human Hand Motion Analysis and Synthesis of Optimal Power Grasps for a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Francesca Cordella

    2014-03-01

    Full Text Available Biologically inspired robotic systems can find important applications in biomedical robotics, since studying and replicating human behaviour can provide new insights into motor recovery, functional substitution and human-robot interaction. The analysis of human hand motion is essential for collecting information about human hand movements useful for generalizing reaching and grasping actions on a robotic system. This paper focuses on the definition and extraction of quantitative indicators for describing optimal hand grasping postures and replicating them on an anthropomorphic robotic hand. A motion analysis has been carried out on six healthy human subjects performing a transverse volar grasp. The extracted indicators point to invariant grasping behaviours between the involved subjects, thus providing some constraints for identifying the optimal grasping configuration. Hence, an optimization algorithm based on the Nelder-Mead simplex method has been developed for determining the optimal grasp configuration of a robotic hand, grounded on the aforementioned constraints. It is characterized by a reduced computational cost. The grasp stability has been tested by introducing a quality index that satisfies the form-closure property. The grasping strategy has been validated by means of simulation tests and experimental trials on an arm-hand robotic system. The obtained results have shown the effectiveness of the extracted indicators to reduce the non-linear optimization problem complexity and lead to the synthesis of a grasping posture able to replicate the human behaviour while ensuring grasp stability. The experimental results have also highlighted the limitations of the adopted robotic platform (mainly due to the mechanical structure) to achieve the optimal grasp configuration.

  8. Changing predictions, stable recognition: Children's representations of downward incline motion.

    Science.gov (United States)

    Hast, Michael; Howe, Christine

    2017-11-01

    Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds' (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children largely recognized correct events as correct and rejected incorrect ones. The results also demonstrate that while predictions change with increasing age, recognition shows signs of stability. The findings provide further support for a hybrid model of object representations and argue in favour of stable core cognition existing alongside developmental changes. Statement of contribution: What is already known on this subject? Children's predictions of physical events show limitations in accuracy. Their recognition of such events suggests children may use different knowledge sources in their reasoning. What does the present study add? Predictions fluctuate more strongly than recognition, suggesting stable core cognition. But recognition also shows some fluctuation, arguing for a hybrid model of knowledge representation. © 2017 The British Psychological Society.

  9. Identification of motion from multi-channel EMG signals for control of prosthetic hand

    International Nuclear Information System (INIS)

    Geethanjali, P.; Ray, K.K.

    2011-01-01

    The authors in this paper propose an effective and efficient pattern recognition technique based on four-channel electromyogram (EMG) signals for the control of a multifunction prosthetic hand. Time-domain features such as mean absolute value, number of zero crossings, number of slope sign changes and waveform length are considered for pattern recognition. The patterns are classified using the simple logistic regression (SLR) technique and a decision tree (DT) built with the J48 algorithm. In this study, six specific hand and wrist motions are identified from EMG signals obtained from ten able-bodied subjects. By considering the relevant dominant features for pattern recognition, the processing time as well as the memory requirements of the SLR and DT classifiers are found to be lower in comparison with a neural network (NN), k-nearest neighbour model 1 (kNN Model-1), k-nearest neighbour model 2 (kNN Model-2) and linear discriminant analysis. The classification accuracy of the SLR classifier is found to be 91 ± 1.9%. (author)
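    The four time-domain features named above (mean absolute value, zero crossings, slope sign changes, waveform length) are standard and simple to compute per channel and per analysis window; a minimal sketch follows, in which the noise threshold eps and the window handling are assumptions rather than values from the paper.

      # Minimal sketch of the four time-domain EMG features named in the abstract,
      # computed per analysis window and per channel; thresholds are assumptions.
      import numpy as np

      def td_features(window, eps=0.01):
          """window: 1-D array of EMG samples from one channel."""
          mav = np.mean(np.abs(window))                  # mean absolute value
          diffs = np.diff(window)
          wl = np.sum(np.abs(diffs))                     # waveform length
          # zero crossings with a small amplitude threshold to suppress noise
          zc = np.sum((window[:-1] * window[1:] < 0) &
                      (np.abs(diffs) > eps))
          # slope sign changes, again thresholded
          ssc = np.sum((diffs[:-1] * diffs[1:] < 0) &
                       ((np.abs(diffs[:-1]) > eps) | (np.abs(diffs[1:]) > eps)))
          return np.array([mav, zc, ssc, wl])

      def feature_vector(emg_window):
          """emg_window: array of shape (n_samples, 4) for the four EMG channels."""
          return np.concatenate([td_features(emg_window[:, ch])
                                 for ch in range(emg_window.shape[1])])

    Any of the classifiers mentioned in the abstract (SLR, decision tree, kNN, LDA) can then be trained on the stacked per-window feature vectors.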

  10. Robotic Hand-Assisted Training for Spinal Cord Injury Driven by Myoelectric Pattern Recognition: A Case Report.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Stampas, Argyrios; Zhou, Ping

    2017-10-01

    A 51-year-old man with an incomplete C6 spinal cord injury sustained 26 yrs earlier attended twenty 2-hr visits over 10 wks for robot-assisted hand training driven by myoelectric pattern recognition. In each visit, his right hand was assisted to perform motions by an exoskeleton robot, while the robot was triggered by his own motion intentions. The hand robot, designed for this study, can perform six kinds of motions, including hand closing/opening; thumb, index finger, and middle finger closing/opening; and middle, ring, and little finger closing/opening. After the training, his grip force increased from 13.5 to 19.6 kg, his pinch force remained the same (5.0 kg), his Box and Block test score increased from 32 to 39, and his score on the Graded Redefined Assessment of Strength, Sensibility, and Prehension test Part 4.B increased from 22 to 24. He also accomplished the Part 4.B tasks 28.8% faster on average. The results demonstrate the feasibility and effectiveness of robot-assisted training driven by myoelectric pattern recognition after spinal cord injury.

  11. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    Science.gov (United States)

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  12. Motion Primitives and Probabilistic Edit Distance for Action Recognition

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2009-01-01

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporal isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.

  13. A Self-Powered Insole for Human Motion Recognition

    Directory of Open Access Journals (Sweden)

    Yingzhou Han

    2016-09-01

    Biomechanical energy harvesting is a feasible solution for powering wearable sensors, either by directly driving electronics or by acting as a wearable self-powered sensor. A wearable insole that not only harvests energy from foot pressure during walking but also serves as a self-powered human motion recognition sensor is reported. The insole is designed as a sandwich structure consisting of two wavy silica gel films separated by a flexible piezoelectric foil stave, which offers higher performance than conventional piezoelectric harvesters with a cantilever structure. The energy harvesting insole is capable of driving some common electronics by scavenging energy from human walking. Moreover, it can be used to recognize human motion, as the waveforms it generates change when people are in different locomotion modes. It is demonstrated that different types of human motion, such as walking and running, are clearly classified by the insole without any external power source. This work not only expands the applications of piezoelectric energy harvesters as wearable power supplies and self-powered sensors, but also provides possible approaches for wearable self-powered human motion monitoring, which is of great importance in many fields such as rehabilitation and sports science.

  14. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns via data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the represented gesture patterns with reasonable accuracy.
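    A compact way to illustrate the HMM stage is per-class training followed by maximum-likelihood classification, as sketched below. It assumes the hmmlearn library and chain-code/trajectory feature sequences already extracted from the tracked hands and fingertips; the state count and covariance type are assumptions, not the paper's settings.

      # Minimal sketch of HMM-based gesture classification in the spirit described
      # above (not the authors' code): one Gaussian HMM is trained per gesture class,
      # and a new sequence is assigned to the class with the highest log-likelihood.
      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      def train_gesture_models(sequences_by_class, n_states=5):
          """sequences_by_class: dict mapping class label -> list of (T_i, D) arrays."""
          models = {}
          for label, seqs in sequences_by_class.items():
              X = np.vstack(seqs)                  # concatenate all sequences
              lengths = [len(s) for s in seqs]     # per-sequence lengths
              m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
              m.fit(X, lengths)
              models[label] = m
          return models

      def classify(models, sequence):
          """Return the label of the model with the highest log-likelihood."""
          return max(models, key=lambda label: models[label].score(sequence))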

  15. Computational intelligence in multi-feature visual pattern recognition hand posture and face recognition using biologically inspired approaches

    CERN Document Server

    Pisharady, Pramod Kumar; Poh, Loh Ai

    2014-01-01

    This book presents a collection of computational intelligence algorithms that addresses issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations and poor performance against complex backgrounds. The book has 3 parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer / tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...

  16. Development of virtual reality exercise of hand motion assist robot for rehabilitation therapy by patient self-motion control.

    Science.gov (United States)

    Ueki, Satoshi; Nishimoto, Yutaka; Abe, Motoyuki; Kawasaki, Haruhisa; Ito, Satoshi; Ishigure, Yasuhiko; Mizumoto, Jun; Ojika, Takeo

    2008-01-01

    This paper presents a virtual reality-enhanced hand rehabilitation support system with a symmetric master-slave motion assistant for independent rehabilitation therapies. Our aim is to provide fine motion exercise for a hand and fingers, which allows the impaired hand of a patient to be driven by his or her healthy hand on the opposite side. Since most disabilities caused by cerebral vascular accidents or bone fractures are hemiplegic, we adopted a symmetric master-slave motion assistant system in which the impaired hand is driven by the healthy hand on the opposite side. A VR environment displaying an effective exercise was created in consideration of the system's characteristics. To verify the effectiveness of the system, a clinical test was executed by applying it to six patients.

  17. Identification of strong earthquake ground motion by using pattern recognition

    International Nuclear Information System (INIS)

    Suzuki, Kohei; Tozawa, Shoji; Temmyo, Yoshiharu.

    1983-01-01

    Many researchers have proposed methods for adequately grasping the technological features of complex earthquake ground motion waveforms and utilizing them as input to structural systems, but a unified method of generating the artificial earthquake waves used in the aseismic design of nuclear facilities has not been established. In this research, earthquake ground motion was treated as an irregular process with unsteady amplitude and frequency, and the running power spectral density was expressed as a light-and-dark image on a plane with orthogonal time and frequency axes. A method of classifying this image into a number of technologically important categories by pattern recognition was proposed. This method is based on the concept called the compound similarity method in image technology, entirely different from voice diagnosis, and it has the feature that the result of identification can be quantitatively evaluated by the analysis of the correlation of spatial images. Next, standard pattern models of the simulated running power spectral density corresponding to the representative classification categories were proposed. Finally, a method of generating unsteady simulated earthquake motion was presented. (Kako, I.)
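    The running power spectral density at the heart of the method can be illustrated with a short-time spectral estimate; in the sketch below the sampling rate, window length and the synthetic accelerogram are all assumptions, and the result is the kind of time-frequency image that the compound similarity classification would operate on.

      # Minimal sketch (assumed, not from the paper): computing a running power
      # spectral density of a ground-motion accelerogram as a time-frequency image.
      import numpy as np
      from scipy.signal import spectrogram

      fs = 100.0                          # sampling rate of the accelerogram (Hz), assumed
      t = np.arange(0, 40, 1 / fs)
      accel = np.random.randn(t.size)     # stand-in for a recorded accelerogram

      f, tau, Sxx = spectrogram(accel, fs=fs, nperseg=256, noverlap=192,
                                scaling="density")    # Sxx: power spectral density
      running_psd_image = 10 * np.log10(Sxx + 1e-12)  # dB image over (frequency, time)
      print(running_psd_image.shape)                  # rows: frequencies, cols: time bins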

  18. "Like the palm of my hands": Motor imagery enhances implicit and explicit visual recognition of one's own hands.

    Science.gov (United States)

    Conson, Massimiliano; Volpicella, Francesco; De Bellis, Francesco; Orefice, Agnese; Trojano, Luigi

    2017-10-01

    A key point in the motor imagery literature is that judging hands in palm view recruits sensory-motor information to a higher extent than judging hands in back view, due to the greater biomechanical complexity implied in rotating hands depicted from the palm than from the back. We took advantage of this solid evidence to test the nature of a phenomenon known as the self-advantage, i.e. the advantage in implicitly recognizing self vs. others' hand images. The self-advantage has actually been found when implicitly but not explicitly judging self-hands, likely due to a dissociation between implicit and explicit body representations. However, such a finding might be related to the extent to which motor imagery is recruited during implicit and explicit processing of hand images. We tested this hypothesis in two behavioural experiments. In Experiment 1, right-handed participants judged the laterality of either self or others' hands, whereas in Experiment 2, an explicit recognition of one's own hands was required. Crucially, in both experiments participants were randomly presented with hand images viewed from the back or from the palm. The main result of both experiments was a self-advantage when participants judged hands from the palm view. This novel finding demonstrates that increasing the "motor imagery load" during processing of self vs. others' hands can elicit a self-advantage in explicit recognition tasks as well. Future studies testing the possible dissociation between implicit and explicit visual body representations should take into account the modulatory effect of motor imagery load on self-hand processing. Copyright © 2017. Published by Elsevier B.V.

  19. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    Science.gov (United States)

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-03-24

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows, unlike repetitive activities such as walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing the window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
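    Although the paper's exact feature set is not reproduced here, the segmentation-and-classification pipeline it evaluates can be sketched as follows; the window length, the simple per-axis statistics and the random forest are assumptions chosen only to make the example concrete.

      # Minimal sketch (assumptions throughout): segmenting wrist- and pocket-sensor
      # streams into fixed windows, computing simple per-axis statistics, and
      # concatenating both positions before classification.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def windows(signal, fs, win_s=5.0):
          """signal: (n_samples, n_axes) array; yields non-overlapping windows."""
          step = int(win_s * fs)
          for start in range(0, len(signal) - step + 1, step):
              yield signal[start:start + step]

      def window_features(win):
          # mean, std, min, max per axis (a deliberately simple feature set)
          return np.concatenate([win.mean(0), win.std(0), win.min(0), win.max(0)])

      def build_dataset(wrist, pocket, labels_per_window, fs=50):
          X = [np.concatenate([window_features(w), window_features(p)])
               for w, p in zip(windows(wrist, fs), windows(pocket, fs))]
          return np.array(X), np.array(labels_per_window[:len(X)])

      # X, y = build_dataset(wrist_stream, pocket_stream, labels)
      # clf = RandomForestClassifier(n_estimators=100).fit(X, y)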

  20. An Approach for Pattern Recognition of EEG Applied in Prosthetic Hand Drive

    Directory of Open Access Journals (Sweden)

    Xiao-Dong Zhang

    2011-12-01

    To control a prosthetic hand by electroencephalogram (EEG) alone, setting up a direct communication and control channel between the human brain and the prosthetic hand has become a hot spot in robotics research. In this paper, the EEG signal is analyzed for multiple complicated hand activities. Two methods of EEG pattern recognition are then investigated, and a neural prosthetic hand system driven by a brain-computer interface (BCI) is set up, which can complete four kinds of actions (the arm's free state, arm movement, hand crawl, hand open). Through several off-line and on-line experiments, the results show that the neural prosthetic hand system driven by the BCI is reasonable and feasible, and that the C-support vector classifier-based method is better than the BP neural network for EEG pattern recognition of multiple complicated hand activities.
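    The comparison between a C-support vector classifier and a BP neural network can be sketched with scikit-learn as below; the EEG feature vectors, the four-class labels and all hyperparameters are stand-ins, not the paper's data or settings.

      # Minimal sketch (not the paper's pipeline): training a C-support vector
      # classifier on EEG feature vectors for the four action classes and comparing
      # it with a small BP-style neural network.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 16))        # stand-in EEG feature vectors
      y = rng.integers(0, 4, size=200)      # 4 classes: rest, arm move, hand crawl, hand open

      svc = SVC(C=1.0, kernel="rbf", gamma="scale")                  # C-SVC
      mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)   # BP-style baseline

      print("C-SVC accuracy:", cross_val_score(svc, X, y, cv=5).mean())
      print("MLP accuracy:  ", cross_val_score(mlp, X, y, cv=5).mean())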

  1. Human Action Recognition Using Ordinal Measure of Accumulated Motion

    Directory of Open Access Journals (Sweden)

    Kim Wonjun

    2010-01-01

    This paper presents a method for recognizing human actions from a single query action video. We propose an action recognition scheme based on the ordinal measure of accumulated motion, which is robust to variations in appearance. To this end, we first define the accumulated motion image (AMI) using image differences. Then the AMI of the query action video is resized to a sub-image by intensity averaging, and a rank matrix is generated by ordering the sample values in the sub-image. By computing the distances from the rank matrix of the query action video to the rank matrices of all local windows in the target video, local windows close to the query action are detected as candidates. To find the best match among the candidates, their energy histograms, which are obtained by projecting AMI values in the horizontal and vertical directions, respectively, are compared with those of the query action video. The proposed method does not require any preprocessing task such as learning or segmentation. To justify the efficiency and robustness of our approach, experiments are conducted on various datasets.
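    The AMI and rank-matrix construction described above is straightforward to prototype; the sketch below assumes a grayscale video array and an 8 x 8 block grid, and uses a plain L1 distance between rank matrices as one plausible ordinal measure.

      # Minimal sketch of the accumulated motion image (AMI) and ordinal rank matrix
      # described above; block size and video handling are assumptions.
      import numpy as np

      def accumulated_motion_image(frames):
          """frames: (T, H, W) grayscale video; AMI = mean of absolute frame differences."""
          diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
          return diffs.mean(axis=0)

      def rank_matrix(ami, grid=(8, 8)):
          """Downsample the AMI by block averaging, then replace values by their ranks."""
          H, W = ami.shape
          gh, gw = grid
          sub = ami[:H - H % gh, :W - W % gw].reshape(gh, H // gh, gw, W // gw).mean(axis=(1, 3))
          order = sub.ravel().argsort()
          ranks = np.empty_like(order)
          ranks[order] = np.arange(order.size)
          return ranks.reshape(grid)

      def ordinal_distance(r1, r2):
          """Simple L1 distance between rank matrices (one plausible ordinal measure)."""
          return np.abs(r1.astype(int) - r2.astype(int)).sum()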

  2. The contribution of the body and motion to whole person recognition.

    Science.gov (United States)

    Simhi, Noa; Yovel, Galit

    2016-05-01

    While the importance of faces in person recognition has been the subject of many studies, there are relatively few studies examining recognition of the whole person in motion even though this most closely resembles daily experience. Most studies examining the whole body in motion use point light displays, which have many advantages but are impoverished and unnatural compared to real life. To determine which factors are used when recognizing the whole person in motion we conducted two experiments using naturalistic videos. In Experiment 1 we used a matching task in which the first stimulus in each pair could either be a video or multiple still images from a video of the full body. The second stimulus, on which person recognition was performed, could be an image of either the full body or face alone. We found that the body contributed to person recognition beyond the face, but only after exposure to motion. Since person recognition was performed on still images, the contribution of motion to person recognition was mediated by form-from-motion processes. To assess whether dynamic identity signatures may also contribute to person recognition, in Experiment 2 we presented people in motion and examined person recognition from videos compared to still images. Results show that dynamic identity signatures did not contribute to person recognition beyond form-from-motion processes. We conclude that the face, body and form-from-motion processes all appear to play a role in unfamiliar person recognition, suggesting the importance of considering the whole body and motion when examining person perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Body schema and corporeal self-recognition in the alien hand syndrome.

    Science.gov (United States)

    Olgiati, Elena; Maravita, Angelo; Spandri, Viviana; Casati, Roberta; Ferraro, Francesco; Tedesco, Lucia; Agostoni, Elio Clemente; Bolognini, Nadia

    2017-07-01

    The alien hand syndrome (AHS) is a rare neuropsychological disorder characterized by involuntary, yet purposeful, hand movements. Patients with the AHS typically complain about a loss of agency associated with a feeling of estrangement for actions performed by the affected limb. The present study explores the integrity of body representation in AHS, focusing on 2 main processes: multisensory integration and visual self-recognition of body parts. Three patients affected by AHS following a right-hemisphere stroke, with clinical symptoms akin to the posterior variant of AHS, were tested and their performance was compared with that of 18 age-matched healthy controls. AHS patients and controls underwent 2 experimental tasks: a same-different visual matching task for body postures, which assessed the ability to use one's own body schema for encoding others' body postural changes (Experiment 1), and an explicit self-hand recognition task, which assessed the ability to visually recognize one's own hands (Experiment 2). As compared to controls, all AHS patients were unable to access a reliable multisensory representation of their alien hand and use it for decoding others' postural changes; however, they could rely on an efficient multisensory representation of their intact (ipsilesional) hand. Two AHS patients also presented with a specific impairment in the visual self-recognition of their alien hand, but normal recognition of their intact hand. This evidence suggests that AHS following a right-hemisphere stroke may involve a disruption of the multisensory representation of the alien limb; instead, self-hand recognition mechanisms may be spared. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of a robotic manipulator are controlled with the help of visual feedback by a new method that reduces position errors by an order of magnitude. The robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of the workspace and two mounted on the wrist of an auxiliary robot arm. The stereoscopic cameras on the arm give close-up views of the object and end effector. The cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  5. Hand Motion Classification Using a Multi-Channel Surface Electromyography Sensor

    Directory of Open Access Journals (Sweden)

    Dong Sun

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.

  6. Hand motion classification using a multi-channel surface electromyography sensor.

    Science.gov (United States)

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.
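    The two proposed feature families can be sketched as follows for a single analysis window; the energy-ratio formulation (per-channel energy normalized by total energy) is one plausible reading of the description, and Lin's concordance correlation coefficient is used for the pairwise channel features.

      # Minimal sketch (assumed formulation) of the two feature families named in the
      # abstract: per-channel energy ratios and pairwise concordance correlation
      # between channels, computed for one analysis window of multi-channel sEMG.
      import numpy as np
      from itertools import combinations

      def energy_ratio_features(window):
          """window: (n_samples, n_channels); energy of each channel over total energy."""
          energy = np.sum(window.astype(np.float64) ** 2, axis=0)
          return energy / (energy.sum() + 1e-12)

      def concordance_corr(x, y):
          """Lin's concordance correlation coefficient between two channels."""
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = np.mean((x - mx) * (y - my))
          return 2 * cov / (vx + vy + (mx - my) ** 2 + 1e-12)

      def concordance_features(window):
          n_ch = window.shape[1]
          return np.array([concordance_corr(window[:, i], window[:, j])
                           for i, j in combinations(range(n_ch), 2)])

      # feature_vec = np.concatenate([energy_ratio_features(w), concordance_features(w)])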

  7. Combining heterogenous features for 3D hand-held object recognition

    Science.gov (United States)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in the areas of human-machine interaction and multimedia retrieval. However, due to the problems of visual polysemy and concept polymorphism, it is still a great challenge to obtain reliable recognition results for 2D images. Recently, with the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge can be relieved because the depth channel brings more information. A very special and important case of object recognition is hand-held object recognition, as the hand is a direct and natural medium for both human-human interaction and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Although hand-crafted features preserve low-level information such as shape and color, they have shown weakness in representing high-level semantic information compared with automatically learned features, especially deep features. Deep features have shown great advantages in large-scale dataset recognition but are not always robust to rotation or scale variance compared with hand-crafted features. In this paper, we propose a method to combine hand-crafted point cloud features and deep learned features in the RGB and depth channels. First, hand-held object segmentation is implemented by using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features in different stages using linear concatenation and multiple kernel learning (MKL). Then a trained model is used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.

  8. An Enhanced Intelligent Handheld Instrument with Visual Servo Control for 2-DOF Hand Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Yan Naing Aye

    2013-10-01

    The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data are acquired from a high-speed monovision camera attached to the optical surgical microscope, and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancelling is implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectric actuated micromanipulator in ITrem2 generates the 3D motion to compensate erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motions simulated by a motion stage are reduced by 67% for multiple-frequency oscillatory motions and by 56.16% for pre-conditioned recorded physiological tremor.
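    The BMFLC stage can be illustrated with its standard LMS form, in which tremor is modelled as a sum of sinusoids over a narrow frequency band; the band limits, frequency step and adaptation gain below are assumptions, not the values used in ITrem2.

      # Minimal sketch (standard textbook form, not the authors' code) of a
      # band-limited multiple Fourier linear combiner (BMFLC) adapted by LMS:
      # the tremor estimate can be subtracted from the measured motion.
      import numpy as np

      def bmflc_estimate(signal, fs, f_low=6.0, f_high=14.0, df=0.5, mu=0.01):
          freqs = np.arange(f_low, f_high + df, df)      # tremor band (Hz), assumed
          w = np.zeros(2 * freqs.size)                   # adaptive weights
          estimate = np.zeros_like(signal, dtype=float)
          for k, s in enumerate(signal):
              t = k / fs
              x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                                  np.cos(2 * np.pi * freqs * t)])  # reference vector
              y = w @ x                                  # current tremor estimate
              e = s - y                                  # estimation error
              w += 2 * mu * e * x                        # LMS weight update
              estimate[k] = y
          return estimate

      # filtered = measured_motion - bmflc_estimate(measured_motion, fs=250)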

  9. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time face and hand tracking used within the system. In the method, an improved agent algorithm is used to extract and track the face and hand regions. A Kalman filter is introduced to forecast the position and search rectangle, and self-adaptation of the target color is designed to counteract the effect of illumination changes.

  10. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  11. A video-based system for hand-driven stop-motion animation.

    Science.gov (United States)

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  12. A triboelectric motion sensor in wearable body sensor network for human activity recognition.

    Science.gov (United States)

    Hui Huang; Xian Li; Ye Sun

    2016-08-01

    The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis and rehabilitation. Rather than using traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments were conducted to collect data for five common activities: sitting and standing, walking, climbing upstairs, going downstairs, and running. The k-Nearest Neighbor (kNN) algorithm is adopted to recognize these activities and validate the feasibility of this new approach. The results show that our system can perform physical activity recognition with a success rate of over 80% for walking, sitting and standing. The triboelectric structure can also be used as an energy harvester for motion harvesting due to its high output voltage under random low-frequency motion.

  13. Self-recognition of avatar motion: how do I know it's me?

    Science.gov (United States)

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  14. Effects of Isometric Hand-Grip Muscle Contraction on Young Adults' Free Recall and Recognition Memory

    Science.gov (United States)

    Tomporowski, Phillip D.; Albrecht, Chelesa; Pendleton, Daniel M.

    2017-01-01

    Purpose: The purpose of this study was to determine if physical arousal produced by isometric hand-dynamometer contraction performed during word-list learning affects young adults' free recall or recognition memory. Method: Twenty-four young adults (12 female; mean age = 22 years) were presented with four 20-item word lists. Moderate arousal…

  15. Action Recognition Using Motion Primitives and Probabilistic Edit Distance

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2006-01-01

    In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions based on temporal trajectories or temporal volumes, primitive-based recognition is based on representing a temporal sequence containing an action by only a few characteristic time ... into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and the recognition rate is 91 ...
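    The string-matching step can be illustrated with the deterministic core of the approach; the sketch below computes a plain Levenshtein edit distance between a pruned primitive string and per-action templates, leaving out the probabilistic weighting of edit operations that the paper adds.

      # Minimal sketch: standard edit (Levenshtein) distance between a pruned
      # primitive string and each action template.
      def edit_distance(a, b):
          """Number of insertions, deletions and substitutions turning a into b."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, start=1):
              curr = [i]
              for j, cb in enumerate(b, start=1):
                  cost = 0 if ca == cb else 1
                  curr.append(min(prev[j] + 1,          # deletion
                                  curr[j - 1] + 1,      # insertion
                                  prev[j - 1] + cost))  # substitution
              prev = curr
          return prev[-1]

      def classify(observed, templates):
          """templates: dict mapping action name -> primitive string."""
          return min(templates, key=lambda name: edit_distance(observed, templates[name]))

      # classify("AABBC", {"wave": "AABBCC", "point": "DDEE"})  -> "wave"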

  16. Discriminative Vision-Based Recovery and Recognition of Human Motion

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    2009-01-01

    The automatic analysis of human motion from images opens up the way for applications in the domains of security and surveillance, human-computer interaction, animation, retrieval and sports motion analysis. In this dissertation, the focus is on robust and fast human pose recovery and action

  17. Unsteady hydrodynamic forces acting on a hand and its flow field during sculling motion.

    Science.gov (United States)

    Takagi, Hideki; Shimada, Shohei; Miwa, Takahiro; Kudo, Shigetada; Sanders, Ross; Matsuuchi, Kazuo

    2014-12-01

    The goal of this research is to clarify the mechanism by which unsteady forces are generated during sculling by a skilled swimmer and thereby to contribute to improving propulsive techniques. We used particle image velocimetry (PIV) to acquire data on the kinematics of the hand during sculling, such as fluid forces and flow field. By investigating the correlations between these data, we expected to find a new propulsion mechanism. The experiment was performed in a flow-controlled water channel. The participant executed sculling motions to remain at a fixed position despite constant water flow. PIV was used to visualize the flow-field cross-section in the plane of hand motion. Moreover, the fluid forces acting on the hand were estimated from pressure distribution measurements performed on the hand and simultaneous three-dimensional motion analysis. By executing the sculling motion, a skilled swimmer produces large unsteady fluid forces when the leading-edge vortex occurs on the dorsal side of the hand and wake capture occurs on the palm side. By using a new approach, we observed interesting unsteady fluid phenomena similar to those of flying insects. The study indicates that it is essential for swimmers to fully exploit vortices. A better understanding of these phenomena might lead to an improvement in sculling techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Using virtual data for training deep model for hand gesture recognition

    Science.gov (United States)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply trained model for hand gesture recognition from hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy of 89%, and the second architecture, with split input, achieves an accuracy of 85.2%. In this paper, the authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. Interest in this method of data preparation is motivated by the need to overcome one of the main challenges of supervised deep learning: the need for a copious amount of labelled data during training.
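    A model of the kind described can be sketched in a few lines of PyTorch; the layer sizes, input resolution and output head below are assumptions for illustration, not the authors' two architectures.

      # Minimal sketch (architecture assumed): a small convolutional network that maps
      # a grayscale hand image either to a gesture class or, with n_outputs=2, to a
      # 2-D hand position.
      import torch
      import torch.nn as nn

      class HandNet(nn.Module):
          def __init__(self, n_outputs=10):          # n_outputs=2 for (x, y) regression
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
              )
              self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, n_outputs))

          def forward(self, x):
              return self.head(self.features(x))

      model = HandNet(n_outputs=10)
      dummy = torch.randn(8, 1, 64, 64)               # a batch of synthetic hand images
      print(model(dummy).shape)                       # torch.Size([8, 10])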

  19. Entropic Movement Complexity Reflects Subjective Creativity Rankings of Visualized Hand Motion Trajectories

    Science.gov (United States)

    Peng, Zhen; Braun, Daniel A.

    2015-01-01

    In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated with the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank-ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also tested whether these complexity measures correlate with individual creativity in divergent thinking tasks, but did not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
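    One simple entropic complexity measure of the kind discussed can be sketched as follows; the direction-quantization symbolization and the plain Shannon entropy estimator are assumptions for illustration and are not necessarily the estimators used in the study.

      # Minimal sketch (assumed): symbolize a trajectory by quantizing movement
      # directions, then compute the Shannon entropy of the symbol distribution.
      import numpy as np

      def symbolize(trajectory, n_bins=8):
          """trajectory: (T, 2) array of hand positions; returns direction symbols."""
          steps = np.diff(trajectory, axis=0)
          angles = np.arctan2(steps[:, 1], steps[:, 0])          # range [-pi, pi]
          return np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

      def shannon_entropy(symbols, n_bins=8):
          counts = np.bincount(symbols, minlength=n_bins)
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())                  # bits per symbol

      # complexity = shannon_entropy(symbolize(recorded_trajectory))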

  20. Hand posture effects on handedness recognition as revealed by the Simon effect

    Directory of Open Access Journals (Sweden)

    Allan P Lameira

    2009-11-01

    We investigated the influence of hand posture in handedness recognition, while varying the spatial correspondence between stimulus and response in a modified Simon task. Drawings of the left and right hands were displayed either in a back or palm view while participants discriminated stimulus handedness by pressing left/right keys with their hands resting either in a prone or supine posture. As a control, subjects performed a regular Simon task using simple geometric shapes as stimuli. Results showed that when hands were in a prone posture, the spatially corresponding trials (i.e., stimulus and response located on the same side) were faster than the non-corresponding trials (i.e., stimulus and response on opposite sides). In contrast, for the supine posture, there was no difference between corresponding and non-corresponding trials. The control experiment with the regular Simon task showed that the posture of the responding hand had no influence on performance. When the stimulus is the drawing of a hand, however, the posture of the responding hand affects the spatial correspondence effect because response location is coded based on multiple reference points, including the body of the hand.

  1. An observational study of implicit motor imagery using laterality recognition of the hand after stroke.

    Science.gov (United States)

    Amesz, Sarah; Tessari, Alessia; Ottoboni, Giovanni; Marsden, Jon

    2016-01-01

    Objective: To explore the relationship between laterality recognition after stroke and impairments in attention, 3D object rotation and functional ability. Design: Observational cross-sectional study. Setting: Acute care teaching hospital. Participants: Thirty-two acute and sub-acute people with stroke and 36 healthy, age-matched controls. Main measures: Laterality recognition, attention and mental rotation of objects. Within the stroke group, the relationship between laterality recognition and functional ability, neglect, hemianopia and dyspraxia was further explored. Results: People with stroke were significantly less accurate (69% vs 80%) and showed delayed reaction times (3.0 vs 1.9 seconds) when determining the laterality of a pictured hand. Deficits in either accuracy or reaction time were seen in 53% of people with stroke. The accuracy of laterality recognition was associated with reduced functional ability (R(2) = 0.21), less accurate mental rotation of objects (R(2) = 0.20) and dyspraxia (p = 0.03). Conclusions: Implicit motor imagery is affected in a significant number of patients after stroke, with these deficits related to lesions of the motor networks as well as to other deficits seen after stroke. This research provides new insights into how laterality recognition is related to a number of other deficits after stroke, including the mental rotation of 3D objects, attention and dyspraxia. Further research is required to determine whether treatment programmes can improve deficits in laterality recognition and impact functional outcomes after stroke.

  2. Shape-based hand recognition approach using the morphological pattern spectrum

    Science.gov (United States)

    Ramirez-Cortes, Juan Manuel; Gomez-Gil, Pilar; Sanchez-Perez, Gabriel; Prieto-Castro, Cesar

    2009-01-01

    We propose the use of the morphological pattern spectrum, or pecstrum, as the basis of a biometric shape-based hand recognition system. The system receives an image of the right hand of a subject in an unconstrained pose, which is captured with a commercial flatbed scanner. Owing to the pecstrum's invariance to translation and rotation, the system does not require the use of pegs for a fixed hand position, which simplifies the image acquisition process. This novel feature-extraction method is tested using a Euclidean distance classifier for identification and verification cases, obtaining 97% correct identification and an equal error rate (EER) of 0.0285 (2.85%) for the verification mode. The obtained results indicate that the pattern spectrum represents a good feature-extraction alternative for low- and medium-level hand-shape-based biometric applications.
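    The pecstrum itself is easy to prototype as the area removed by morphological openings of increasing size; the structuring element, maximum size and normalization in the sketch below are assumptions, not the paper's settings.

      # Minimal sketch of the morphological pattern spectrum (pecstrum) of a binary
      # hand silhouette: the area removed by openings of increasing size.
      import numpy as np
      from scipy import ndimage

      def pattern_spectrum(binary_img, max_size=30):
          structure = ndimage.generate_binary_structure(2, 1)   # 4-connected element
          areas = [binary_img.sum()]
          for n in range(1, max_size + 1):
              opened = ndimage.binary_opening(binary_img, structure=structure, iterations=n)
              areas.append(opened.sum())
          areas = np.array(areas, dtype=float)
          spectrum = areas[:-1] - areas[1:]          # area removed at each scale
          return spectrum / (areas[0] + 1e-12)       # normalized feature vector

      # features = pattern_spectrum(hand_silhouette > 0)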

  3. Activity Recognition Invariant to Sensor Orientation with Wearable Motion Sensors.

    Science.gov (United States)

    Yurtman, Aras; Barshan, Billur

    2017-08-09

    Most activity recognition studies that employ wearable sensors assume that the sensors are attached at pre-determined positions and orientations that do not change over time. Since this is not the case in practice, it is of interest to develop wearable systems that operate invariantly to sensor position and orientation. We focus on invariance to sensor orientation and develop two alternative transformations to remove the effect of absolute sensor orientation from the raw sensor data. We test the proposed methodology in activity recognition with four state-of-the-art classifiers using five publicly available datasets containing various types of human activities acquired by different sensor configurations. While the ordinary activity recognition system cannot handle incorrectly oriented sensors, the proposed transformations allow the sensors to be worn at any orientation at a given position on the body, and achieve nearly the same activity recognition performance as the ordinary system for which the sensor units are not rotatable. The proposed techniques can be applied to existing wearable systems without much effort, by simply transforming the time-domain sensor data at the pre-processing stage.
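    As a concrete illustration of orientation invariance (though not necessarily either of the two transformations proposed in the paper), the sketch below keeps only quantities that do not change when the sensor is rotated at a fixed body position: the acceleration magnitude and its components along and perpendicular to the estimated gravity direction.

      # Minimal sketch of one simple orientation-invariant representation for
      # accelerometer data; a common heuristic, assumed here for illustration.
      import numpy as np

      def orientation_invariant(acc):
          """acc: (T, 3) accelerometer samples in sensor coordinates."""
          gravity = acc.mean(axis=0)
          gravity /= np.linalg.norm(gravity) + 1e-12          # estimated gravity direction
          magnitude = np.linalg.norm(acc, axis=1)
          vertical = acc @ gravity                            # component along gravity
          horizontal = np.sqrt(np.maximum(magnitude**2 - vertical**2, 0.0))
          return np.column_stack([magnitude, vertical, horizontal])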

  4. A motion-planning method for dexterous hand operating a tool based on bionic analysis

    Directory of Open Access Journals (Sweden)

    Wei Bo

    2017-01-01

    In order to meet the need for robots to operate tools of different types and sizes, dexterous hands are studied by many research institutions. However, the large number of joints in a dexterous hand makes motion planning difficult. Aiming at this problem, this paper proposes a planning method based on a back-propagation neural network (BPNN), inspired by the human hand. Firstly, this paper analyses the structure and function of the human hand and summarizes its typical strategies of operation. Secondly, based on the manual operation strategy, tools are classified according to their shape, and the operation modes of the dexterous hand are presented. Thirdly, the BPNN is trained on the humanoid operation strategy and then outputs the operation plan. Finally, simulation experiments on grasping simple tools and operating complex tools are carried out in MATLAB and ADAMS. The simulations verify the effectiveness of the method.

  5. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.

    Science.gov (United States)

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-04-15

    Polyarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. Control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions; however, a considerable gap persists between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a polyarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Throughout the system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since those target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real-time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.

  6. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    Science.gov (United States)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that presents the whole learning process of a laboratory. Many tools and materials are needed by the user to carry out practical work in it, so the user can experience a new learning atmosphere through this innovation. As technologies become more sophisticated, carrying them into education can make learning more effective and efficient. Supporting technologies such as a head mounted display device and a hand motion gesture device are needed to build the VR Lab, and their integration is the basis of this research. The head mounted display device is used for viewing the 3D environment of the virtual reality laboratory, while the hand motion gesture device captures the user's real hand so that it can be visualized in the virtual reality laboratory. The results suggest that using these newer technologies in the learning process can make it more interesting and easier to understand.

  7. Motion Imitation and Recognition using Parametric Hidden Markov Models

    DEFF Research Database (Denmark)

    Herzog, Dennis; Ude, Ales; Krüger, Volker

    2008-01-01

    ... are important. Only together do they convey the whole meaning of an action. Similarly, to imitate a movement, the robot needs to select the proper action and parameterize it, e.g., by the relative position of the object that needs to be grasped. We propose to utilize parametric hidden Markov models (PHMMs), which extend the classical HMMs by introducing a joint parameterization of the observation densities, to simultaneously solve the problems of action recognition, parameterization of the observed actions, and action synthesis. The proposed approach was fully implemented on a humanoid robot HOAP-3. To evaluate the approach, we focused on reaching and pointing actions. Even though the movements are very similar in appearance, our approach is able to distinguish the two movement types and discover the parameterization, thus enabling both action recognition and action synthesis. Through parameterization we ...

  8. Changing predictions, stable recognition: Children’s representations of downward incline motion

    OpenAIRE

    Hast, Michael; Howe, Christine

    2017-01-01

    Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds' (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children...

  9. Motion Normalized Proportional Control for Improved Pattern Recognition-Based Myoelectric Control.

    Science.gov (United States)

    Scheme, Erik; Lock, Blair; Hargrove, Levi; Hill, Wendy; Kuruganti, Usha; Englehart, Kevin

    2014-01-01

    This paper describes two novel proportional control algorithms for use with pattern recognition-based myoelectric control. The systems were designed to provide automatic configuration of motion-specific gains and to normalize the control space to the user's usable dynamic range. Class-specific normalization parameters were calculated using data collected during classifier training and require no additional user action or configuration. The new control schemes were compared to the standard method of deriving proportional control using a one-degree-of-freedom Fitts' law test for each of the wrist flexion/extension, wrist pronation/supination and hand close/open degrees of freedom. Performance was evaluated using the Fitts' law throughput value as well as more descriptive metrics including path efficiency, overshoot, stopping distance and completion rate. The proposed normalization methods significantly outperformed the incumbent method in every performance category for able-bodied subjects (p < 0.001) and nearly every category for amputee subjects. Furthermore, one proposed method significantly outperformed both other methods in throughput (p < 0.0001), yielding 21% and 40% improvement over the incumbent method for amputee and able-bodied subjects, respectively. The proposed control schemes represent a computationally simple method of fundamentally improving myoelectric control users' ability to elicit robust, and controlled, proportional velocity commands.
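    The class-specific normalization can be sketched as below under the assumption that it maps an amplitude estimate between a rest level and a per-class maximum (both taken from the classifier-training data) onto a 0-1 velocity command; the percentile choice and the 'rest' label are illustrative, not the authors' exact algorithm.

      # Minimal sketch (assumed formulation): per-class normalization of a
      # myoelectric amplitude estimate into a proportional velocity command.
      import numpy as np

      def fit_class_ranges(train_mav, train_labels):
          """Estimate rest level and per-class maximum from classifier-training data."""
          rest = train_mav[train_labels == "rest"].mean()
          ranges = {}
          for label in set(train_labels) - {"rest"}:
              ranges[label] = np.percentile(train_mav[train_labels == label], 95)
          return rest, ranges

      def proportional_command(mav, predicted_class, rest, ranges):
          """Map current amplitude to a normalized velocity for the predicted class."""
          if predicted_class == "rest":
              return 0.0
          span = max(ranges[predicted_class] - rest, 1e-6)
          return float(np.clip((mav - rest) / span, 0.0, 1.0))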

  10. Patient cloth with motion recognition sensors based on flexible piezoelectric materials.

    Science.gov (United States)

    Youngsu Cha; Kihyuk Nam; Doik Kim

    2017-07-01

    In this paper, we introduce a patient cloth for position monitoring using motion recognition sensors based on flexible piezoelectric materials. The motion recognition sensors are embedded at three locations in the patient cloth: the knee, hip and back. We use polyvinylidene fluoride (PVDF) as the flexible piezoelectric material for the sensors. By using the piezoelectric effect of the PVDF, we detect electrical signals when the cloth is bent or extended. We analyze the sensor values for human motions by processing the sensor outputs in a custom-made program. Specifically, we focus on the transitions between standing and sitting, and between sitting knee extension and the supine position, which are important motions for patient monitoring.

  11. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    Science.gov (United States)

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of a noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure trajectory similarity and find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.

  12. Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.

    Science.gov (United States)

    Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan

    2018-04-01

    Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of the barometer signals, which is the basis of the barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. Then, a random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when the pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the accuracy of vertical motion recognition.
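    The double-window idea can be sketched as follows, with a short window for responsiveness and a long window for robustness; the window lengths, sampling rate and pressure statistics are assumptions rather than the paper's parameters.

      # Minimal sketch (parameter values and feature choices are assumptions):
      # double-window pressure features feeding a random forest to recognize
      # vertical motions such as taking stairs or an elevator.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def double_window_features(pressure, fs=10, short_s=2.0, long_s=10.0):
          """pressure: 1-D array of barometer readings (hPa); returns one feature row
          per short-window step, each combining short- and long-window statistics."""
          short_n, long_n = int(short_s * fs), int(long_s * fs)
          rows = []
          for end in range(long_n, len(pressure), short_n):
              short_w = pressure[end - short_n:end]
              long_w = pressure[end - long_n:end]
              rows.append([short_w[-1] - short_w[0],   # short-term pressure change
                           short_w.std(),
                           long_w[-1] - long_w[0],     # long-term pressure change
                           long_w.std()])
          return np.array(rows)

      # X = double_window_features(pressure_trace); y = labels_per_row
      # clf = RandomForestClassifier(n_estimators=200).fit(X, y)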

  13. Feasibility of an AI-Based Measure of the Hand Motions of Expert and Novice Surgeons

    Directory of Open Access Journals (Sweden)

    Munenori Uemura

    2018-01-01

    Full Text Available This study investigated whether parameters derived from hand motions of expert and novice surgeons accurately and objectively reflect laparoscopic surgical skill levels using an artificial intelligence system consisting of a three-layer chaos neural network. Sixty-seven surgeons (23 experts and 44 novices) performed a laparoscopic skill assessment task while their hand motions were recorded using a magnetic tracking sensor. Eight parameters evaluated as measures of skill in a previous study were used as inputs to the neural network. Optimization of the neural network was achieved after seven trials with a training dataset of 38 surgeons, with a correct judgment ratio of 0.99. The neural network that prospectively worked with the remaining 29 surgeons had a correct judgment rate of 79% for distinguishing between expert and novice surgeons. In conclusion, our artificial intelligence system distinguished between expert and novice surgeons among surgeons with unknown skill levels.

  14. Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View.

    Science.gov (United States)

    Bambach, Sven; Crandall, David J; Yu, Chen

    2015-11-01

    Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.

  15. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of compact, wearable, low-power HCI equipment applied to gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and the equipment is fitted with a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a modular layered framework, which enables real-time collection (60 fps), processing and transmission through synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm driven by the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm is applied to the system. The results show that overall energy consumption can be as low as 0.5 W.

  16. Postural stability when walking and exposed to lateral oscillatory motion: benefits from hand supports.

    Science.gov (United States)

    Ayık, Hatice Müjde; Griffin, Michael J

    2015-01-01

    While walking on a treadmill, 20 subjects experienced lateral oscillations: frequencies from 0.5 to 2 Hz and velocities from 0.05 to 0.16 m s⁻¹ rms. Postural stability was indicated by ratings of 'discomfort or difficulty in walking', the movement of the centre of pressure beneath the feet and lateral forces applied to a hand support. Hand support improved postural stability with all frequencies and all velocities of oscillatory motion: the lateral velocity of the centre of pressure reduced by 30-50% when using support throughout motion, by 20-30% when instructed to use the support only when required and by 15% during normal walking without oscillation. Improvements in stability, and the forces applied to the hand support, were independent of support height when used continuously throughout motion. When support was used only when required, subjects preferred to hold it 118-134 cm above the surface supporting the feet.

  17. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    Science.gov (United States)

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
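
    As an illustration of the Lempel-Ziv part of this pipeline, the sketch below computes LZ76 complexity on a symbolized hand-motion signal; the quantile-based symbolization and the threshold rule are assumptions for illustration, and in the full approach an SVM (e.g. scikit-learn's SVC) would be trained on the same symbolic patterns.

    ```python
    import numpy as np

    def lz_complexity(seq):
        """Lempel-Ziv (LZ76) complexity of a symbol sequence (list or string)."""
        n = len(seq)
        i, k, l = 0, 1, 1
        c, k_max = 1, 1
        while True:
            if seq[i + k - 1] == seq[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                if k > k_max:
                    k_max = k
                i += 1
                if i == l:
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    def symbolize(motion_magnitude, n_bins=4):
        """Map a hand-motion magnitude signal to discrete symbols by quantile binning."""
        edges = np.quantile(motion_magnitude, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(motion_magnitude, edges)

    # An expert/novice decision could then threshold lz_complexity(symbolize(x)),
    # with the threshold chosen on training data (the study reports ~70% accuracy
    # for the LZ threshold and ~83% for the SVM on the same patterns).
    ```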

  18. Micro-motion Recognition of Spatial Cone Target Based on ISAR Image Sequences

    Directory of Open Access Journals (Sweden)

    Changyong Shu

    2016-04-01

    Full Text Available Accurate micro-motion recognition of a spatial cone target is the foundation of characteristic parameter acquisition. For this reason, a micro-motion recognition method based on distinguishing characteristics extracted from Inverse Synthetic Aperture Radar (ISAR) sequences is proposed in this paper. The projection trajectory formulas of the cone-node strong scattering source and the cone-bottom slip-type strong scattering sources located on the spatial cone target are derived for three micro-motion types, nutation, precession, and spinning, and their correctness is verified by electromagnetic simulation. By comparison, differences are found among the projections of the scattering sources under the different micro-motions; the coordinate information of the scattering sources in the ISAR sequences is extracted by the CLEAN algorithm, and spinning is recognized by thresholding the Doppler value. A double-observation-point Interacting Multiple Model Kalman filter is used to separate the projections of the scattering sources of a nutating or precessing target, and the number of crossing points of each scattering source's projection track is used to distinguish nutation from precession. Finally, the electromagnetic simulation data are used to verify the effectiveness of the micro-motion recognition method.

  19. Effectiveness of adaptive silverware on range of motion of the hand

    Directory of Open Access Journals (Sweden)

    Susan S. McDonald

    2016-02-01

    Full Text Available Background. Hand function is essential to a person’s self-efficacy and greatly affects quality of life. Adapted utensils with handles of increased diameters have historically been used to assist individuals with arthritis or other hand disabilities for feeding, and other related activities of daily living. To date, minimal research has examined the biomechanical effects of modified handles, or quantified the differences in ranges of motion (ROM) when using a standard versus a modified handle. The aim of this study was to quantify the ranges of motion (ROM) required for a healthy hand to use different adaptive spoons with electrogoniometry for the purpose of understanding the physiologic advantages that adapted spoons may provide patients with limited ROM. Methods. Hand measurements included the distal interphalangeal joint (DIP), proximal interphalangeal joint (PIP), and metacarpophalangeal joint (MCP) for each finger and the interphalangeal (IP) and MCP joint for the thumb. Participants were 34 females age 18–30 (mean age 20.38 ± 1.67) with no previous hand injuries or abnormalities. Participants grasped spoons with standard handles, and spoons with handle diameters of 3.18 cm (1.25 inch) and 4.45 cm (1.75 inch). ROM measurements were obtained with an electrogoniometer to record the angle at each joint for each of the spoon handle sizes. Results. A 3 × 3 × 4 repeated measures ANOVA (Spoon handle size by Joint by Finger) found main effects on ROM of Joint (F(2, 33) = 318.68, Partial η² = .95, p < .001), Spoon handle size (F(2, 33) = 598.73, Partial η² = .97, p < .001), and Finger (F(3, 32) = 163.83, Partial η² = .94, p < .001). As the spoon handle diameter size increased, the range of motion utilized to grasp the spoon handle decreased in all joints and all fingers (p < 0.01). Discussion. This study confirms the hypothesis that less range of motion is required to grip utensils with larger diameter handles, which in turn may reduce challenges for

  20. Modeling of hand function by mapping the motion of individual muscle voxels with MR imaging velocity tagging

    International Nuclear Information System (INIS)

    Drace, J.; Pele, N.; Herfkens, R.J.

    1990-01-01

    This paper reports on a method to correlate the three-dimensional (3D) motion of the fingers with the complex motion of the intrinsic, flexor, and extensor muscles. A better understanding of hand function is important to the medical, surgical, and rehabilitation treatment of patients with arthritic, neurogenic, and mechanical hand dysfunctions. Static, high-resolution MR volumetric imaging defines the 3D shape of each individual bone in the hands of three subjects and three patients. Single-section velocity-tagging sequences (VIGOR) are performed through the hand and forearm, while the actual 3D motion of the hand is computed from the MR model and readings of fiber-optic goniometers attached to each finger. The accuracy of the velocity tagging is also tested with a motion phantom

  1. Impact of body posture on laterality judgement and explicit recognition tasks performed on self and others' hands.

    Science.gov (United States)

    Conson, Massimiliano; Errico, Domenico; Mazzarella, Elisabetta; De Bellis, Francesco; Grossi, Dario; Trojano, Luigi

    2015-04-01

    Judgments on laterality of hand stimuli are faster and more accurate when dealing with one's own than others' hand, i.e. the self-advantage. This advantage seems to be related to activation of a sensorimotor mechanism while implicitly processing one's own hands, but not during explicit one's own hand recognition. Here, we specifically tested the influence of proprioceptive information on the self-hand advantage by manipulating participants' body posture during self and others' hand processing. In Experiment 1, right-handed healthy participants judged laterality of either self or others' hands, whereas in Experiment 2, an explicit recognition of one's own hands was required. In both experiments, the participants performed the task while holding their left or right arm flexed with their hand in direct contact with their chest ("flexed self-touch posture") or with their hand placed on a wooden smooth surface in correspondence with their chest ("flexed proprioceptive-only posture"). In an "extended control posture", both arms were extended and in contact with thighs. In Experiment 1 (hand laterality judgment), we confirmed the self-advantage and demonstrated that it was enhanced when the subjects judged left-hand stimuli at 270° orientation while keeping their left arm in the flexed proprioceptive-only posture. In Experiment 2 (explicit self-hand recognition), instead, we found an advantage for others' hand ("self-disadvantage") independently from posture manipulation. Thus, position-related proprioceptive information from left non-dominant arm can enhance sensorimotor one's own body representation selectively favouring implicit self-hands processing.

  2. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming.

    Science.gov (United States)

    Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara

    2010-03-01

    We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.

  3. Surface EMG signals based motion intent recognition using multi-layer ELM

    Science.gov (United States)

    Wang, Jianhui; Qi, Lin; Wang, Xiao

    2017-11-01

    The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) signals contain motion information, since these electrical signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a human's intention of active motion, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable step for drawing significant information from the original signals, which is a tedious task requiring rich, relevant experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The mathematical information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the entire multi-layer ELM (ML-ELM) algorithm. The method is then employed for sEMG-based neural intention recognition. The case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme across 5 different types of upper-limb motions. This indicates the effectiveness and learning capability of the ML-ELM in such motion intent recognition applications.
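
    A minimal numpy sketch of the ELM-AE plus ELM classifier idea (ML-ELM) is given below; the layer sizes, tanh activation, and use of a single autoencoder layer are illustrative assumptions rather than the configuration used in the paper.

    ```python
    import numpy as np

    def elm_autoencoder(X, n_hidden, rng):
        """ELM-AE: random hidden layer, output weights solved by least squares."""
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                # random hidden representation
        beta = np.linalg.pinv(H) @ X          # reconstruct X from H
        return lambda Z: Z @ beta.T           # feature mapping reusable on new data

    def elm_classifier(F, y_onehot, n_hidden, rng):
        """Plain ELM: random hidden layer plus least-squares output weights."""
        W = rng.standard_normal((F.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(F @ W + b)
        beta = np.linalg.pinv(H) @ y_onehot
        return lambda Fnew: np.tanh(Fnew @ W + b) @ beta

    # Illustrative use (X_*: per-window sEMG features, y_onehot: one-hot motion labels):
    # rng = np.random.default_rng(0)
    # encode = elm_autoencoder(X_train, n_hidden=200, rng=rng)
    # predict = elm_classifier(encode(X_train), y_onehot, n_hidden=100, rng=rng)
    # y_pred = predict(encode(X_test)).argmax(axis=1)
    ```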

  4. Research on Three-dimensional Motion History Image Model and Extreme Learning Machine for Human Body Movement Trajectory Recognition

    Directory of Open Access Journals (Sweden)

    Zheng Chang

    2015-01-01

    Full Text Available Based on traditional machine vision recognition technology and traditional artificial neural networks for body movement trajectories, this paper identifies the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) and the extreme learning machine (constructed as the classification artificial neural network of body movements), the paper applies the method to machine vision of the body movement trajectory. In detail, the paper gives a detailed introduction to the algorithm and realization scheme of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine. Finally, by comparing the results of the recognition experiments, it attempts to verify that the body movement trajectory recognition method based on the three-dimensional motion history image and the extreme learning machine has a more accurate recognition rate and better robustness.

  5. Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Michalis Papakostas

    2017-06-01

    Full Text Available Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human’s emotions may be recognized using several modalities such as analyzing facial expressions, speech, or physiological parameters (e.g., electroencephalograms, electrocardiograms, etc.). However, measuring these modalities may be difficult, obtrusive or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper no extra features are required other than the spectrogram representations, and hand-crafted features were only extracted for validation purposes of our method. Moreover, it does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results compared to traditional approaches that use hand-crafted features.
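
    A minimal PyTorch sketch of a CNN that consumes spectrogram patches as single-channel images is shown below; the layer sizes, input resolution, and number of emotion classes are assumptions for illustration and do not reproduce the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class SpectrogramCNN(nn.Module):
        """Small CNN treating a speech spectrogram as a 1-channel image."""
        def __init__(self, n_classes, in_h=128, in_w=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * (in_h // 4) * (in_w // 4), n_classes)

        def forward(self, x):                  # x: (batch, 1, in_h, in_w)
            f = self.features(x)
            return self.classifier(f.flatten(start_dim=1))

    # Hypothetical usage with 4 emotion classes and dummy spectrogram batches:
    # model = SpectrogramCNN(n_classes=4)
    # logits = model(torch.randn(8, 1, 128, 128))
    # loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
    ```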

  6. Gesture Recognition from Data Streams of Human Motion Sensor Using Accelerated PSO Swarm Search Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available Human motion sensing technology is gaining tremendous popularity nowadays, with practical applications such as video surveillance for security, hand signing, smart homes, and gaming. These applications capture human motions in real time from video sensors, and the data patterns are nonstationary and ever changing. While the hardware technology of such motion sensing devices as well as their data collection process has become relatively mature, the computational challenge lies in the real-time analysis of these live feeds. In this paper we argue that traditional data mining methods fall short of accurately analyzing the human activity patterns in the sensor data stream. The shortcoming is due to an algorithmic design that is not adaptive to the dynamic changes in gesture motions. The successor of these algorithms, known as data stream mining, is evaluated against traditional data mining through a case of gesture recognition over motion data using Microsoft Kinect sensors. Three different subjects were asked to read three comic strips and to tell the stories in front of the sensor. The data stream contains coordinates of articulation points and various positions of the parts of the human body corresponding to the actions that the user performs. In particular, a novel technique of feature selection using swarm search and accelerated PSO is proposed to enable fast preprocessing for inducing an improved classification model in real time. Superior results are shown in the experiment that runs on this empirical data stream. The contribution of this paper is a comparative study between traditional and data stream mining algorithms and the incorporation of the novel improved feature selection technique in a scenario where different gesture patterns are to be recognized from streaming sensor data.
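
    The sketch below illustrates a PSO-style swarm search over binary feature masks with cross-validated accuracy as the fitness; it is a plain (not accelerated) PSO variant with assumed inertia and acceleration coefficients, using a k-nearest-neighbour classifier as a stand-in for the paper's data-stream learner.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def pso_feature_selection(X, y, n_particles=12, n_iter=20, seed=0):
        """Swarm search over binary feature masks; fitness = cross-validated accuracy."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        pos = rng.random((n_particles, d))          # continuous positions in [0, 1]
        vel = np.zeros((n_particles, d))

        def fitness(mask):
            if not mask.any():
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=3)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        scores = np.array([fitness(p > 0.5) for p in pos])
        pbest, pbest_score = pos.copy(), scores.copy()
        best = scores.argmax()
        gbest, gbest_score = pos[best].copy(), scores[best]

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, d))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            scores = np.array([fitness(p > 0.5) for p in pos])
            improved = scores > pbest_score
            pbest[improved], pbest_score[improved] = pos[improved], scores[improved]
            if scores.max() > gbest_score:
                gbest, gbest_score = pos[scores.argmax()].copy(), scores.max()
        return gbest > 0.5                          # selected feature mask
    ```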

  7. Development of a robotic evaluation system for the ability of proprioceptive sensation in slow hand motion.

    Science.gov (United States)

    Tanaka, Yoshiyuki; Mizoe, Genki; Kawaguchi, Tomohiro

    2015-01-01

    This paper proposes a simple diagnostic methodology for assessing the ability of proprioceptive/kinesthetic sensation by using a robotic device. The ability to perceive virtual frictional forces is examined while operating the robotic device by hand at a uniform slow velocity along a virtual straight/circular path. Experimental results from healthy subjects demonstrate that the percentage of correct answers in the designed perceptual tests changes with the motion direction as well as with the arm configuration and the HFM (human force manipulability) measure. It can be expected that the proposed methodology can be applied to the early detection of neuromuscular/neurological disorders.

  8. Universal Robot Hand Equipped with Tactile and Joint Torque Sensors: Development and Experiments on Stiffness Control and Object Recognition

    Directory of Open Access Journals (Sweden)

    Hiroyuki NAKAMOTO

    2007-04-01

    Full Text Available Various humanoid robots have been developed, and multifunction robot hands that can be attached to such robots, like a human hand, are needed. However, a truly useful robot hand has not yet been developed, because there are many problems, such as the control of many degrees of freedom and the processing of enormous sensor outputs. Toward realizing such a robot hand, we have developed a five-finger robot hand. In this paper, the detailed structure of the developed robot hand is described. The robot hand we developed has five multi-joint fingers equipped with joint torque sensors and tactile sensors. We report experimental results of stiffness control with the developed robot hand. Those results show that it is possible to change the stiffness of the joints. Moreover, we propose an object recognition method based on the tactile sensor. The validity of that method is confirmed by experimental results.

  9. A novel rotational invariants target recognition method for rotating motion blurred images

    Science.gov (United States)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    Images captured by the image sensor are blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of restoring the image first and then identifying the target can improve the recognition rate, recognition takes a long time. To solve this problem, a model for extracting rotational blur invariants was constructed that recognizes the target directly. The model includes three metric layers. The object description capability of the metric algorithms in the three layers, which comprise a gray-value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants, ranges from low to high, and the metric layer with the lowest description ability is used as the input, which gradually eliminates non-target pixels from the degraded image. Experimental results show that the proposed model can improve the correct target recognition rate for blurred images and achieves a good trade-off between computational complexity and regional descriptive capability.

  10. An Online Full-Body Motion Recognition Method Using Sparse and Deficient Signal Sequences

    Directory of Open Access Journals (Sweden)

    Chengyu Guo

    2014-01-01

    Full Text Available This paper presents a method to recognize continuous full-body human motion online by using sparse, low-cost sensors. The only input signals needed are linear accelerations without any rotation information, which are provided by four Wiimote sensors attached to the four human limbs. Based on the fused hidden Markov model (FHMM) and an autoregressive process, a predictive fusion model (PFM) is put forward, which considers the different influences of the upper and lower limbs, establishes an HMM for each part, and fuses them using a probabilistic fusion model. Then an autoregressive process is introduced into the HMM to predict the gesture, which enables the model to deal with incomplete signal data. In order to reduce the number of alternatives in the online recognition process, a graph model is built that rejects some motion types based on the graph structure and previous recognition results. Finally, an online signal segmentation method based on semantic information and the PFM is presented to complete the efficient recognition task. The results indicate that the method is robust, with a high recognition rate on sparse and deficient signals, and can be used in various interactive applications.

  11. Kinect-based sign language recognition of static and dynamic hand movements

    Science.gov (United States)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also describe some factors they encountered that caused misclassification of signs.
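
    A minimal sketch of the normalized-correlation matching step is given below, using OpenCV's matchTemplate on equally resized grayscale images; the 128x128 size and the dictionary-of-templates interface are illustrative assumptions, and the Kinect capture and segmentation stages are not reproduced.

    ```python
    import cv2

    def best_matching_sign(query, templates):
        """Match a segmented hand image against letter templates by
        normalized cross-correlation and return the best-scoring letter."""
        best_letter, best_score = None, -1.0
        q = cv2.resize(query, (128, 128))
        for letter, tmpl in templates.items():
            t = cv2.resize(tmpl, (128, 128))
            # equal-sized images give a single correlation score
            score = cv2.matchTemplate(q, t, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best_letter, best_score = letter, score
        return best_letter, best_score

    # Hypothetical usage with grayscale images loaded via cv2.imread(..., 0):
    # templates = {"A": cv2.imread("A.png", 0), "B": cv2.imread("B.png", 0)}
    # letter, score = best_matching_sign(frame, templates)
    ```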

  12. Atypical activation of the mirror neuron system during perception of hand motion in autism.

    Science.gov (United States)

    Martineau, Joëlle; Andersson, Frédéric; Barthélémy, Catherine; Cottier, Jean-Philippe; Destrieux, Christophe

    2010-03-12

    Disorders in the autism spectrum are characterized by deficits in social and communication skills such as imitation, pragmatic language, theory of mind, and empathy. The discovery of the "mirror neuron system" (MNS) in macaque monkeys may provide a basis from which to explain some of the behavioral dysfunctions seen in individuals with autism spectrum disorders (ASD). We studied seven right-handed, high-functioning autistic male subjects and eight normal subjects (TD group) using functional magnetic resonance imaging during observation and execution of hand movements compared to a control condition (rest). The between-group comparison of the contrast [observation versus rest] provided evidence of greater bilateral activation of the inferior frontal gyrus during observation of human motion in the ASD group than in the TD group. This hyperactivation of the pars opercularis (belonging to the MNS) during observation of human motion in autistic subjects provides strong support for the hypothesis of atypical activity of the MNS, which may be at the core of the social deficits in autism. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Finger Angle-Based Hand Gesture Recognition for Smart Infrastructure Using Wearable Wrist-Worn Camera

    Directory of Open Access Journals (Sweden)

    Feiyu Chen

    2018-03-01

    Full Text Available The rise of domestic robots in smart infrastructure has raised demand for intuitive and natural interaction between humans and robots. To address this problem, a wearable wrist-worn camera (WwwCam) is proposed in this paper. With the capability of recognizing human hand gestures in real time, it enables services such as controlling mopping robots, mobile manipulators, or appliances in smart-home scenarios. The recognition is based on finger segmentation and template matching. A distance transform algorithm is adopted and adapted to robustly segment fingers from the hand. Based on the fingers' angles relative to the wrist, a finger angle prediction algorithm and a template matching metric are proposed. All possible gesture types of the captured image are first predicted, and then evaluated and compared to the template image to achieve the classification. Unlike other template matching methods that rely heavily on a large training set, this scheme is highly flexible since it requires only one image as the template, and can classify gestures formed by different combinations of fingers. In the experiment, it successfully recognized ten finger gestures from number zero to nine defined by American Sign Language with an accuracy up to 99.38%. Its performance was further demonstrated by manipulating a robot arm using the implemented algorithms and WwwCam to transport and pile up wooden building blocks.
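
    The sketch below illustrates the distance-transform-based segmentation idea: the palm centre is taken as the maximum of the distance transform of the hand mask, a palm disk is removed, and the remaining blobs are treated as fingers whose angles are measured about the palm centre. The disk radius factor, area threshold, and use of the palm centre rather than the wrist are assumptions, not the paper's exact algorithm.

    ```python
    import cv2
    import numpy as np

    def finger_angles(hand_mask):
        """hand_mask: uint8 binary image (255 = hand). Returns finger angles in degrees."""
        dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
        _, max_val, _, palm_center = cv2.minMaxLoc(dist)     # palm = deepest point

        # Remove an (assumed) palm disk so that only finger blobs remain.
        fingers = hand_mask.copy()
        cv2.circle(fingers, palm_center, int(1.6 * max_val), 0, thickness=-1)

        n, _, stats, centroids = cv2.connectedComponentsWithStats(fingers)
        angles = []
        for i in range(1, n):                                 # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] < 50:               # assumed noise threshold
                continue
            dx = centroids[i][0] - palm_center[0]
            dy = palm_center[1] - centroids[i][1]             # image y axis points down
            angles.append(np.degrees(np.arctan2(dy, dx)))
        return sorted(angles)
    ```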

  14. Self-Organizing Neural Integration of Pose-Motion Features for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    German Ignacio Parisi

    2015-06-01

    Full Text Available The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented towards human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR networks that obtain progressively generalized representations of sensory inputs and learn inherent spatiotemporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best 21 results for a public benchmark of domestic daily actions.

  15. ALPHABET SIGN LANGUAGE RECOGNITION USING LEAP MOTION TECHNOLOGY AND RULE BASED BACKPROPAGATION-GENETIC ALGORITHM NEURAL NETWORK (RBBPGANN

    Directory of Open Access Journals (Sweden)

    Wijayanti Nurul Khotimah

    2017-01-01

    Full Text Available Sign language recognition is used to help people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by a Multi-Center Study in Southeast Asia, Indonesia is among the top four countries in the number of patients with hearing disability (4.6%). Therefore, the existence of sign language recognition is important. Some research has been conducted in this field, and many types of neural networks have been used for recognizing various sign languages; however, their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Here, thirty-four features were extracted using Leap Motion. Further, a new method, the Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was used to recognize these sign languages. This method is a combination of rules and a Backpropagation Genetic Algorithm Neural Network (BPGANN). Experiments show that the proposed application can recognize sign language with up to 93.8% accuracy. It is very good at recognizing large multiclass instances and can be a solution to the overfitting problem in neural network algorithms.

  16. The scaling behavior of hand motions reveals self-organization during an executive function task

    Science.gov (United States)

    Anastas, Jason R.; Stephen, Damian G.; Dixon, James A.

    2011-05-01

    Recent approaches to cognition explain cognitive phenomena in terms of interaction-dominant dynamics. In the current experiment, we extend this approach to executive function, a construct used to describe flexible, goal-oriented behavior. Participants were asked to perform a widely used executive function task, card sorting, under two conditions. In one condition, participants were given a rule with which to sort the cards. In the other condition, participants had to induce the rule from experimenter feedback. The motion of each participant’s hand was tracked during the sorting task. Detrended fluctuation analysis was performed on the inter-point time series using a windowing strategy to capture changes over each trial. For participants in the induction condition, the Hurst exponent sharply increased and then decreased. The Hurst exponents for the explicit condition did not show this pattern. Our results suggest that executive function may be understood in terms of changes in stability that arise from interaction-dominant dynamics.
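
    For readers unfamiliar with the analysis, a compact sketch of detrended fluctuation analysis (DFA), which returns the scaling (Hurst-like) exponent of a 1-D series such as the inter-point time series above, is given below; the window sizes and linear detrending order are common defaults, not necessarily those used in the study.

    ```python
    import numpy as np

    def dfa_exponent(x, min_win=8, max_win=None, n_scales=12):
        """Detrended fluctuation analysis: slope of log F(n) versus log n."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())                    # integrated profile
        if max_win is None:
            max_win = len(x) // 4
        scales = np.unique(np.logspace(np.log10(min_win), np.log10(max_win),
                                       n_scales).astype(int))
        flucts = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[:n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            rms = []
            for seg in segs:                           # linear detrend per window
                coef = np.polyfit(t, seg, 1)
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            flucts.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return slope                                   # Hurst-like exponent alpha
    ```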

  17. Impact of Sliding Window Length in Indoor Human Motion Modes and Pose Pattern Recognition Based on Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Gaojing Wang

    2018-06-01

    Full Text Available Human activity recognition (HAR) is essential for understanding people’s habits and behaviors, providing an important data source for precise marketing and for research in psychology and sociology. Different approaches have been proposed and applied to HAR. Data segmentation using a sliding window is a basic step in the HAR procedure, wherein the window length directly affects recognition performance. However, the window length is generally selected arbitrarily without systematic study. In this study, we examined the impact of window length on smartphone sensor-based human motion and pose pattern recognition. With data collected from smartphone sensors, we tested a range of window lengths on five popular machine-learning methods: decision tree, support vector machine, K-nearest neighbor, Gaussian naïve Bayesian, and adaptive boosting. From the results, we provide recommendations for choosing the appropriate window length. The results corroborate that the influence of window length is significant for motion mode recognition but largely limited for pose pattern recognition. For motion mode recognition, a window length between 2.5–3.5 s provides an optimal tradeoff between recognition performance and speed. Adaptive boosting outperformed the other methods. For pose pattern recognition, 0.5 s was enough to obtain a satisfactory result. In addition, all of the tested methods performed well.
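
    A minimal sketch of the sliding-window segmentation step, parameterized by window length in seconds, is given below; the 50% overlap, mean/standard-deviation features, and the AdaBoost usage note are illustrative assumptions rather than the study's exact protocol.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def window_features(data, labels, fs, win_s, overlap=0.5):
        """data: (n_samples, n_axes) sensor stream; labels: integer class per sample.

        Returns one feature vector and one majority label per sliding window.
        """
        win = int(win_s * fs)
        step = max(1, int(win * (1 - overlap)))
        X, y = [], []
        for start in range(0, len(data) - win + 1, step):
            seg = data[start:start + win]
            X.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
            y.append(np.bincount(labels[start:start + win]).argmax())
        return np.asarray(X), np.asarray(y)

    # Sweeping window lengths, as in the study (adaptive boosting as classifier):
    # for win_s in (0.5, 1.5, 2.5, 3.5):
    #     X, y = window_features(sensor_stream, sample_labels, fs=50, win_s=win_s)
    #     clf = AdaBoostClassifier().fit(X[:800], y[:800])
    #     print(win_s, clf.score(X[800:], y[800:]))
    ```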

  18. Identification of hand motion using background subtraction method and extraction of image binary with backpropagation neural network on skeleton model

    Science.gov (United States)

    Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati

    2018-03-01

    Capturing and recording human motion is mostly done for sports, health, animated films, criminology, and robotics applications. This study combines background subtraction and a backpropagation neural network, with the purpose of producing and finding similar movements. The acquisition process used an 8 MP camera recording in MP4 format, with a duration of 48 seconds at 30 frames per second; the extracted video produced 1444 frames that serve as input to the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are basically used to distinguish one object from another. Feature extraction is performed using motion-based morphology analysis based on the 7 invariant moments, producing four different motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the type of hand movement using seven inputs. Testing and training with a variety of parameters shows that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values of the system implementation into the user interface. The identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process is done to obtain the best results.
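
    A minimal OpenCV sketch of the background-subtraction and binary-mask extraction front end described above is given below; the MOG2 subtractor, thresholds, and morphological cleanup are assumptions, and the backpropagation identification stage is not reproduced.

    ```python
    import cv2

    def extract_hand_masks(video_path):
        """Yield a cleaned binary mask per frame using background subtraction."""
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = subtractor.apply(frame)
            _, binary = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadows
            binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove speckle
            yield binary
        cap.release()

    # Each mask could then be summarised, e.g. by the 7 Hu moments
    # (cv2.HuMoments(cv2.moments(mask))), and fed to the backpropagation network.
    ```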

  19. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proved useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.

  20. A Novel Model-Based Driving Behavior Recognition System Using Motion Sensors

    Directory of Open Access Journals (Sweden)

    Minglin Wu

    2016-10-01

    Full Text Available In this article, a novel driving behavior recognition system based on a specific physical model and motion sensory data is developed to promote traffic safety. Based on the theory of rigid body kinematics, we build a specific physical model to reveal how the data change during vehicle motion. In this work, we adopt a nine-axis motion sensor including a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, and apply a Kalman filter for noise elimination and an adaptive time window for data extraction. Based on feature extraction guided by the built physical model, various classifiers are implemented to recognize different driving behaviors. Leveraging the system, normal driving behaviors (such as accelerating, braking, lane changing and turning with caution) and aggressive driving behaviors (such as accelerating, braking, lane changing and turning suddenly) can be classified with a high accuracy of 93.25%. Compared with traditional driving behavior recognition methods using machine learning only, the proposed system possesses a solid theoretical basis, performs better and has good prospects.
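
    A minimal sketch of the kind of Kalman filtering used for noise elimination on a single inertial axis is shown below; the random-walk state model and the noise variances are illustrative assumptions, not the paper's vehicle model.

    ```python
    import numpy as np

    def kalman_smooth(z, q=1e-3, r=1e-1):
        """Constant-value (random walk) Kalman filter applied to one sensor axis.

        z: raw measurements; q: process noise variance; r: measurement noise variance.
        """
        x_hat = np.zeros(len(z))
        x, p = z[0], 1.0
        for k, zk in enumerate(z):
            p = p + q                  # predict step (state is assumed constant)
            K = p / (p + r)            # Kalman gain
            x = x + K * (zk - x)       # update with the measurement residual
            p = (1.0 - K) * p
            x_hat[k] = x
        return x_hat

    # Hypothetical usage on a 3-axis accelerometer array acc of shape (n, 3):
    # acc_filtered = np.column_stack([kalman_smooth(acc[:, i]) for i in range(3)])
    ```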

  1. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on an accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, approximately 45 times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures achieves 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.

  2. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand and finally the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking, but for very high speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.

  3. Sketch Style Recognition, Transfer and Synthesis of Hand-Drawn Sketches

    KAUST Repository

    Shaheen, Sara

    2017-07-19

    Humans have always used sketches to explain the visual world. It is a simple and straightforward mean to communicate new ideas and designs. Consequently, as in almost every aspect of our modern life, the relatively recent major developments in computer science have highly contributed to enhancing individual sketching experience. The literature of sketch related research has witnessed seminal advancements and a large body of interesting work. Following up with this rich literature, this dissertation provides a holistic study on sketches through three proposed novel models including sketch analysis, transfer, and geometric representation. The first part of the dissertation targets sketch authorship recognition and analysis of sketches. It provides answers to the following questions: Are simple strokes unique to the artist or designer who renders them? If so, can this idea be used to identify authorship or to classify artistic drawings? The proposed stroke authorship recognition approach is a novel method that distinguishes the authorship of 2D digitized drawings. This method converts a drawing into a histogram of stroke attributes that is discriminative of authorship. Extensive classification experiments on a large variety of datasets are conducted to validate the ability of the proposed techniques to distinguish unique authorship of artists and designers. The second part of the dissertation is concerned with sketch style transfer from one freehand drawing to another. The proposed method exploits techniques from multi-disciplinary areas including geometrical modeling and image processing. It consists of two methods of transfer: stroke-style and brush-style transfer. (1) Stroke-style transfer aims to transfer the style of the input sketch at the stroke level to the style encountered in other sketches by other artists. This is done by modifying all the parametric stroke segments in the input, so as to minimize a global stroke-level distance between the input and

  4. 3D hand motion trajectory prediction from EEG mu and beta bandpower.

    Science.gov (United States)

    Korik, A; Sosnik, R; Siddique, N; Coyle, D

    2016-01-01

    A motion trajectory prediction (MTP) - based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2 Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distribution of the theta (4-8 Hz), mu (8-12 Hz), and beta (12-28 Hz) bands is more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) compared to the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12 Hz) and low-beta (12-18 Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion trajectory relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs.
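
    A minimal sketch of the bandpower time-series (BTS) idea is given below: each EEG channel is bandpass filtered into an assumed mu band, squared and smoothed into a time-varying bandpower, and lagged bandpower values are regressed onto the 3-D hand trajectory with multiple linear regression. Band edges, window length, and lag set are assumptions, not the study's configuration.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import LinearRegression

    def bandpower_series(eeg, fs, band=(8, 12), win_s=0.5):
        """eeg: (n_samples, n_channels). Returns time-varying bandpower per channel."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=0)
        power = filtered ** 2
        kernel = np.ones(int(win_s * fs)) / int(win_s * fs)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"),
                                   0, power)

    def fit_trajectory_model(eeg, hand_xyz, fs, lags=(0, 5, 10)):
        """Multiple linear regression from lagged mu-band power to 3-D hand position."""
        bp = bandpower_series(eeg, fs)
        max_lag = max(lags)
        X = np.hstack([bp[max_lag - l:len(bp) - l] for l in lags])
        y = hand_xyz[max_lag:]
        return LinearRegression().fit(X, y)
    ```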

  5. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Science.gov (United States)

    2014-01-01

    Research on psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movement is formed through the visual system's dorsal and ventral processing streams. The ventral processing stream is associated with the extraction of form information; on the other hand, the dorsal processing stream provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty in the form pathway by applying a Gabor-based supervised object recognition method; it adds biological plausibility while remaining similar to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, making the recognition process more robust. Furthermore, the interaction of these pathways is intriguing, and many studies in various fields have considered it. Here, the interaction of the pathways has been investigated in order to obtain better results. An extreme learning machine (ELM) is employed as the classification unit of this model, since it retains the main properties of artificial neural networks while the difficulty of long training time is substantially diminished. A comparison between two different configurations of the pathway interaction, using a synergetic neural network and an ELM, is presented in terms of accuracy and compatibility. PMID:25276860

  6. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Directory of Open Access Journals (Sweden)

    Bardia Yousefi

    2014-01-01

    Full Text Available Research on psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movement is formed through the visual system's dorsal and ventral processing streams. The ventral processing stream is associated with the extraction of form information; on the other hand, the dorsal processing stream provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty in the form pathway by applying a Gabor-based supervised object recognition method; it adds biological plausibility while remaining similar to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, making the recognition process more robust. Furthermore, the interaction of these pathways is intriguing, and many studies in various fields have considered it. Here, the interaction of the pathways has been investigated in order to obtain better results. An extreme learning machine (ELM) is employed as the classification unit of this model, since it retains the main properties of artificial neural networks while the difficulty of long training time is substantially diminished. A comparison between two different configurations of the pathway interaction, using a synergetic neural network and an ELM, is presented in terms of accuracy and compatibility.

  7. Object instance recognition using motion cues and instance specific appearance models

    Science.gov (United States)

    Schumann, Arne

    2014-03-01

    In this paper we present an object instance retrieval approach. The baseline approach consists of a pool of image features which are computed on the bounding boxes of a query object track and compared to a database of tracks in order to find additional appearances of the same object instance. We improve over this simple baseline approach in multiple ways: 1) we include motion cues to achieve improved robustness to viewpoint and rotation changes, 2) we include operator feedback to iteratively re-rank the resulting retrieval lists and 3) we use operator feedback and location constraints to train classifiers and learn an instance specific appearance model. We use these classifiers to further improve the retrieval results. The approach is evaluated on two popular public datasets for two different applications. We evaluate person re-identification on the CAVIAR shopping mall surveillance dataset and vehicle instance recognition on the VIVID aerial dataset and achieve significant improvements over our baseline results.

  8. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    Science.gov (United States)

    Gafurov, Davrondzhon; Bours, Patrick

    In today's society the demand for reliable verification of a user's identity is increasing. Although biometric technologies based on fingerprints or the iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method which can be suitable for implicit and periodic re-verification of identity. In our approach we use a wearable accelerometer sensor attached to the hip of the person, and the measured hip motion signal is then analysed for identity verification purposes. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5%, and the identification rate at rank 1 was 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.
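
    The two analysis steps named above (gait cycle detection and cycle matching) can be sketched as follows; the autocorrelation-based period estimate, resampling length, and Euclidean cycle distance are illustrative assumptions rather than the authors' exact matching procedure.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def detect_gait_cycles(acc_mag, fs):
        """Split a hip acceleration magnitude signal into gait cycles."""
        x = acc_mag - acc_mag.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        lo, hi = int(0.4 * fs), int(2.0 * fs)      # plausible step period range
        period = lo + np.argmax(ac[lo:hi])
        peaks, _ = find_peaks(x, distance=int(0.8 * period))
        return [x[peaks[i]:peaks[i + 1]] for i in range(len(peaks) - 1)]

    def cycle_distance(cycles_a, cycles_b, n=100):
        """Compare two recordings by their length-normalized average cycles."""
        def average(cycles):
            resampled = [np.interp(np.linspace(0, 1, n),
                                   np.linspace(0, 1, len(c)), c) for c in cycles]
            return np.mean(resampled, axis=0)
        return float(np.linalg.norm(average(cycles_a) - average(cycles_b)))

    # A verification decision would compare cycle_distance(...) to a threshold
    # tuned for the desired trade-off (the paper reports an EER of 7.5%).
    ```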

  9. Hand pose recognition in First Person Vision through graph spectral analysis

    NARCIS (Netherlands)

    Baydoun, Mohamad; Betancourt, Alejandro; Morerio, Pietro; Marcenaro, Lucio; Rauterberg, Matthias; Regazzoni, Carlo

    2017-01-01

    © 2017 IEEE. With the growing availability of wearable technology, video recording devices have become so intimately tied to individuals that they are able to record the movements of users' hands, making hand-based applications one of the most explored areas in First Person Vision (FPV). In particular,

  10. Clinical effectiveness of combined virtual reality and robot assisted fine hand motion rehabilitation in subacute stroke patients.

    Science.gov (United States)

    Huang, Xianwei; Naghdy, Fazel; Naghdy, Golshah; Du, Haiping

    2017-07-01

    Robot-assisted therapy is regarded as an effective and reliable method for the delivery of highly repetitive rehabilitation training in restoring motor skills after a stroke. This study focuses on the rehabilitation of fine hand motion skills due to their vital role in performing delicate activities of daily living (ADL) tasks. The proposed rehabilitation system combines an adaptive assist-as-needed (AAN) control algorithm and a Virtual Reality (VR) based rehabilitation gaming system (RGS). The developed system is described and its effectiveness is validated through clinical trials on a group of eight subacute stroke patients for a period of six weeks. The impact of the training is verified through standard clinical evaluation methods and measuring key kinematic parameters. A comparison of the pre- and post-training results indicates that the method proposed in this study can improve fine hand motion rehabilitation training effectiveness.

  11. Classifying multiple types of hand motions using electrocorticography during intraoperative awake craniotomy and seizure monitoring processes—case studies

    Science.gov (United States)

    Xie, Tao; Zhang, Dingguo; Wu, Zehan; Chen, Liang; Zhu, Xiangyang

    2015-01-01

    In this work, some case studies were conducted to classify several kinds of hand motions from electrocorticography (ECoG) signals during intraoperative awake craniotomy and extraoperative seizure monitoring processes. Four subjects (P1, P2 with intractable epilepsy during seizure monitoring and P3, P4 with brain tumor during awake craniotomy) participated in the experiments. Subjects performed three types of hand motions (Grasp, Thumb-finger motion and Index-finger motion) contralateral to the motor cortex covered with ECoG electrodes. Two methods were used for signal processing. Method I: an autoregressive (AR) model with the Burg method was applied to extract features, an additional waveform length (WL) feature was considered, and linear discriminant analysis (LDA) was used as the classifier. Method II: stationary subspace analysis (SSA) was applied for data preprocessing, and the common spatial pattern (CSP) was used for feature extraction before the LDA decoding process. Applying Method I, the three-class accuracies for P1~P4 were 90.17%, 96.00%, 91.77%, and 92.95%, respectively. For Method II, the three-class accuracies for P1~P4 were 72.00%, 93.17%, 95.22%, and 90.36%, respectively. This study verified the possibility of decoding multiple hand motion types during an awake craniotomy, which is the first step toward dexterous neuroprosthetic control during surgical implantation, in order to verify the optimal placement of electrodes. The accuracy during awake craniotomy was comparable to results during seizure monitoring. This study also indicated that ECoG is a promising approach for precise identification of eloquent cortex during awake craniotomy, and might form a promising BCI system that could benefit both patients and neurosurgeons. PMID:26483627
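
    The feature-plus-LDA pipeline of Method I can be illustrated with a stripped-down sketch on synthetic windows; the AR/Burg coefficients are omitted and only the waveform length (WL) feature is kept, so the data, window sizes and class structure below are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def waveform_length(window):
    """WL per channel: cumulative absolute sample-to-sample change (window: samples x channels)."""
    return np.sum(np.abs(np.diff(window, axis=0)), axis=0)

rng = np.random.default_rng(0)
n_ch, n_samp = 16, 500
X, y = [], []
for label in range(3):            # three hand-motion classes
    for _ in range(60):
        # toy ECoG-like windows whose variance depends on the class label
        w = rng.normal(scale=1.0 + 0.3 * label, size=(n_samp, n_ch))
        X.append(waveform_length(w))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on half of the windows
print("toy three-class accuracy:", clf.score(X[1::2], y[1::2]))
```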

  12. Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence.

    Science.gov (United States)

    Longcamp, Marieke; Boucard, Céline; Gilhodes, Jean-Claude; Anton, Jean-Luc; Roth, Muriel; Nazarian, Bruno; Velay, Jean-Luc

    2008-05-01

    Fast and accurate visual recognition of single characters is crucial for efficient reading. We explored the possible contribution of writing memory to character recognition processes. We evaluated the ability of adults to discriminate new characters from their mirror images after being taught how to produce the characters either by traditional pen-and-paper writing or with a computer keyboard. After training, we found stronger and longer lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. Functional magnetic resonance imaging recordings indicated that the response mode during learning is associated with distinct pathways during recognition of graphic shapes. Greater activity related to handwriting learning and normal letter identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular, the left Broca's area and bilateral inferior parietal lobules. Taken together, these results provide strong arguments in favor of the view that the specific movements memorized when learning how to write participate in the visual recognition of graphic shapes and letters.

  13. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    Science.gov (United States)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  14. Brain correlates of recognition of communicative interactions from biological motion in schizophrenia.

    Science.gov (United States)

    Okruszek, Ł; Wordecha, M; Jarkiewicz, M; Kossowski, B; Lee, J; Marchewka, A

    2017-11-27

    Recognition of communicative interactions is a complex social cognitive ability which is associated with a specific neural activity in healthy individuals. However, neural correlates of communicative interaction processing from whole-body motion have not been known in patients with schizophrenia (SCZ). Therefore, the current study aims to examine the neural activity associated with recognition of communicative interactions in SCZ by using displays of the dyadic interactions downgraded to minimalistic point-light presentations. Twenty-six healthy controls (HC) and 25 SCZ were asked to judge whether two agents presented only by point-light displays were communicating or acting independently. Task-related activity and functional connectivity of brain structures were examined with General Linear Model and Generalized Psychophysiological Interaction approach, respectively. HC were significantly more efficient in recognizing each type of action than SCZ. At the neural level, the activity of the right posterior superior temporal sulcus (pSTS) was observed to be higher in HC compared with SCZ for communicative v. individual action processing. Importantly, increased connectivity of the right pSTS with structures associated with mentalizing (left pSTS) and mirroring networks (left frontal areas) was observed in HC, but not in SCZ, during the presentation of social interactions. Under-recruitment of the right pSTS, a structure known to have a pivotal role in social processing, may also be of importance for higher-order social cognitive deficits in SCZ. Furthermore, decreased task-related connectivity of the right pSTS may result in reduced use of additional sources of information (for instance motor resonance signals) during social cognitive processing in schizophrenia.

  15. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.

    Science.gov (United States)

    Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup

    2017-02-01

    We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100 % accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
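
    A sketch of the low-level descriptive statistics stage under stated assumptions: fingertip positions arrive as an (N, 3) array at a known frame rate, and a "movement" is counted as an upward crossing of a hypothetical speed threshold; the HMM classification stage is not reproduced here.

```python
import numpy as np

def motion_stats(positions, fps, speed_thresh=0.02):
    """positions: (N, 3) fingertip coordinates in metres, sampled at fps frames per second."""
    steps = np.diff(positions, axis=0)                 # per-frame displacement vectors
    step_len = np.linalg.norm(steps, axis=1)
    path_length = step_len.sum()                       # total distance travelled
    completion_time = len(positions) / fps             # task duration in seconds
    moving = step_len * fps > speed_thresh             # frames above the speed threshold
    # a "movement" starts whenever the speed crosses the threshold upward
    n_movements = int(np.sum(moving[1:] & ~moving[:-1]) + moving[0])
    return path_length, n_movements, completion_time

# toy trajectory: a noisy-free circular fingertip path, 10 s at 60 frames per second
t = np.linspace(0, 2 * np.pi, 600)
traj = np.c_[0.1 * np.cos(t), 0.1 * np.sin(t), np.zeros_like(t)]
print(motion_stats(traj, fps=60))
```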

  16. Predictive Coding Strategies for Invariant Object Recognition and Volitional Motion Control in Neuromorphic Agents

    Science.gov (United States)

    2015-09-02

    A model for scene understanding was proposed based on deep convolutional neural networks to improve recognition accuracy. A deep-learning-based model for facial expression recognition was also formulated, capable of recognizing the emotional status of people.

  17. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    Science.gov (United States)

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    The proposed approach obtains more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is found helpful for understanding hand bone geometries in dynamic postures that can be used in simulating 3D hand motion through multipostural MR images.

  18. Quality control of structural MRI images applied using FreeSurfer - a hands-on workflow to rate motion artifacts

    Directory of Open Access Journals (Sweden)

    Lea Luise Backhausen

    2016-12-01

    Full Text Available In structural magnetic resonance imaging, motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., to determine the participants with the most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature, and the existing rating systems differ. Here we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies.

  19. Modular finger and hand motion capturing system based on inertial and magnetic sensors

    Directory of Open Access Journals (Sweden)

    Valtin Markus

    2017-03-01

    Full Text Available The assessment of hand posture and kinematics is increasingly important in various fields. This includes the rehabilitation of stroke survivors with restricted hand function. This paper presents a modular, ambulatory measurement system for the assessment of the remaining hand function and for closed-loop controlled therapy. The device is based on inertial sensors and utilizes up to five interchangeable sensor strips to achieve modularity and to simplify the sensor attachment. We introduce the modular hardware design and describe the algorithms used to calculate the joint angles. Measurements with two experimental setups demonstrate the feasibility and the potential of such a tracking device.
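
    One plausible way such joint angles can be computed, assuming each sensor strip delivers an orientation quaternion per finger segment (this is a sketch, not the authors' published algorithm), is to express the distal segment's orientation relative to the proximal one and convert it to Euler angles.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angles_deg(q_proximal, q_distal):
    """Relative orientation of the distal segment w.r.t. the proximal one,
    returned as intrinsic x-y-z Euler angles in degrees (quaternions as [x, y, z, w])."""
    r_rel = R.from_quat(q_proximal).inv() * R.from_quat(q_distal)
    return r_rel.as_euler("xyz", degrees=True)

# toy example: distal segment flexed 30 degrees about x relative to the proximal segment
q_prox = R.identity().as_quat()
q_dist = R.from_euler("x", 30, degrees=True).as_quat()
print(joint_angles_deg(q_prox, q_dist))   # approximately [30, 0, 0]
```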

  20. When Passive Feels Active--Delusion-Proneness Alters Self-Recognition in the Moving Rubber Hand Illusion.

    Science.gov (United States)

    Louzolo, Anaïs; Kalckert, Andreas; Petrovic, Predrag

    2015-01-01

    Psychotic patients have problems with bodily self-recognition such as the experience of self-produced actions (sense of agency) and the perception of the body as their own (sense of ownership). While it has been shown that such impairments in psychotic patients can be explained by hypersalient processing of external sensory input it has also been suggested that they lack normal efference copy in voluntary action. However, it is not known how problems with motor predictions like efference copy contribute to impaired sense of agency and ownership in psychosis or psychosis-related states. We used a rubber hand illusion based on finger movements and measured sense of agency and ownership to compute a bodily self-recognition score in delusion-proneness (indexed by Peters' Delusion Inventory - PDI). A group of healthy subjects (n=71) experienced active movements (involving motor predictions) or passive movements (lacking motor predictions). We observed a highly significant correlation between delusion-proneness and self-recognition in the passive conditions, while no such effect was observed in the active conditions. This was seen for both ownership and agency scores. The result suggests that delusion-proneness is associated with hypersalient external input in passive conditions, resulting in an abnormal experience of the illusion. We hypothesize that this effect is not present in the active condition because deficient motor predictions counteract hypersalience in psychosis proneness.

  1. Growing in Motion: The Circulation of Used Things on Second-hand Markets

    OpenAIRE

    Staffan Appelgren; Anna Bohlin

    2015-01-01

    From having been associated with poverty and low status, the commerce with second-hand goods in retro shops, flea markets, vintage boutiques and trade via Internet is expanding in Sweden as in many countries in the Global North. This article argues that a significant aspect of the recent interest in second-hand and reuse concerns the meaningfulness of circulation in social life. Using classic anthropological theory on how the circulation of material culture generates sociality, it focuses on...

  2. A qualitative motion analysis study of voluntary hand movement induced by music in patients with Rett syndrome.

    Science.gov (United States)

    Go, Tohshin; Mitani, Asako

    2009-01-01

    Patients with Rett syndrome are known to respond well to music irrespective of their physical and verbal disabilities. Therefore, the relationship between auditory rhythm and their behavior was investigated employing a two-dimensional motion analysis system. Ten female patients aged from three to 17 years were included. When music with a simple regular rhythm started, body rocking appeared automatically in a back and forth direction in all four patients who showed the same rocking motion as their stereotyped movement. Through this body rocking, voluntary movement of the hand increased gradually, and finally became sufficient to beat a tambourine. However, the induction of body rocking by music was not observed in the other six patients who did not show stereotyped body rocking in a back and forth direction. When the music stopped suddenly, voluntary movement of the hand disappeared. When the music changed from a simple regular rhythm to a continuous tone without an auditory rhythm, the periodic movement of both the hand and body was prolonged. Auditory rhythm shows a close relationship with body movement and facilitates synchronized body movement. This mechanism was demonstrated to be preserved in some patients with Rett syndrome, and stimulation with music could be utilized for their rehabilitation.

  3. A qualitative motion analysis study of voluntary hand movement induced by music in patients with Rett syndrome

    Directory of Open Access Journals (Sweden)

    Tohshin Go

    2009-10-01

    Full Text Available Tohshin Go1, Asako Mitani2; 1Center for Baby Science, Doshisha University, Kizugawa, Kyoto, Japan; 2Independent Music Therapist (Poco A Poco Music Room), Tokyo, Japan. Abstract: Patients with Rett syndrome are known to respond well to music irrespective of their physical and verbal disabilities. Therefore, the relationship between auditory rhythm and their behavior was investigated employing a two-dimensional motion analysis system. Ten female patients aged from three to 17 years were included. When music with a simple regular rhythm started, body rocking appeared automatically in a back and forth direction in all four patients who showed the same rocking motion as their stereotyped movement. Through this body rocking, voluntary movement of the hand increased gradually, and finally became sufficient to beat a tambourine. However, the induction of body rocking by music was not observed in the other six patients who did not show stereotyped body rocking in a back and forth direction. When the music stopped suddenly, voluntary movement of the hand disappeared. When the music changed from a simple regular rhythm to a continuous tone without an auditory rhythm, the periodic movement of both the hand and body was prolonged. Auditory rhythm shows a close relationship with body movement and facilitates synchronized body movement. This mechanism was demonstrated to be preserved in some patients with Rett syndrome, and stimulation with music could be utilized for their rehabilitation. Keywords: Rett syndrome, music, auditory rhythm, stereotyped movement, body rocking, voluntary movement

  4. A Hands-on Exploration of the Retrograde Motion of Mars as Seen from the Earth

    Science.gov (United States)

    Pincelli, M. M.; Otranto, S.

    2013-01-01

    In this paper, we propose a set of activities based on the use of a celestial simulator to gain insights into the retrograde motion of Mars as seen from the Earth. These activities provide a useful link between the heliocentric concepts taught in schools and those tackled in typical introductory physics courses based on classical mechanics for…
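
    A small numerical illustration of the kind of activity described, under the simplifying assumption of circular, coplanar orbits with radii of 1 AU and 1.524 AU: compute the geocentric ecliptic longitude of Mars over time and note the intervals where it decreases, which are the retrograde loops.

```python
import numpy as np

# simplified circular, coplanar orbits (radii in AU, periods in days)
r_earth, T_earth = 1.000, 365.25
r_mars,  T_mars  = 1.524, 687.0

t = np.arange(0, 800, 5.0)                            # days
earth = r_earth * np.exp(2j * np.pi * t / T_earth)    # planet positions as complex numbers
mars  = r_mars  * np.exp(2j * np.pi * t / T_mars)

lon = np.unwrap(np.angle(mars - earth))               # geocentric longitude of Mars (rad)
retrograde = np.diff(lon) < 0                         # longitude decreasing -> apparent retrograde motion
print("fraction of sampled days in retrograde:", retrograde.mean())
```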

  5. Growing in Motion: The Circulation of Used Things on Second-hand Markets

    Directory of Open Access Journals (Sweden)

    Staffan Appelgren

    2015-03-01

    Full Text Available From having been associated with poverty and low status, the commerce with second-hand goods in retro shops, flea markets, vintage boutiques and trade via Internet is expanding in Sweden as in many countries in the Global North. This article argues that a significant aspect of the recent interest in second-hand and reuse concerns the meaningfulness of circulation in social life. Using classic anthropological theory on how the circulation of material culture generates sociality, it focuses on how second-hand things are transformed by their circulation. Rather than merely having cultural biographies, second-hand things are reconfigured through their shifts between different social contexts in a process that here is understood as a form of growing. Similar to that of an organism, this growth is continuous, irreversible and dependent on forces both internal and external to it. What emerges is a category of things that combine elements of both commodities and gifts, as these have been theorized within anthropology. While first cycle commodities are purified of their sociality, the hybrid second-hand thing derives its ontological status as well as social and commercial value precisely from retaining "gift qualities", produced by its circulation.

  6. Repeatability of grasp recognition for robotic hand prosthesis control based on sEMG data.

    Science.gov (United States)

    Palermo, Francesca; Cognolato, Matteo; Gijsberts, Arjan; Muller, Henning; Caputo, Barbara; Atzori, Manfredo

    2017-07-01

    Control methods based on sEMG obtained promising results for hand prosthetics. Control system robustness is still often inadequate and does not allow the amputees to perform a large number of movements useful for everyday life. Only a few studies have analyzed the repeatability of sEMG classification of hand grasps. The main goals of this paper are to explore repeatability in sEMG data and to release a repeatability database with the recorded experiments. The data are recorded from 10 intact subjects repeating 7 grasps 12 times, twice a day for 5 days. The data are publicly available on the Ninapro web page. The analysis for the repeatability is based on the comparison of movement classification accuracy in several data acquisitions and for different subjects. The analysis is performed using mean absolute value and waveform length features and a Random Forest classifier. The accuracy obtained by training and testing on acquisitions at different times is on average 27.03% lower than training and testing on the same acquisition. The results obtained by training and testing on different acquisitions suggest that previous acquisitions can be used to train the classification algorithms. The inter-subject variability is remarkable, suggesting that specific characteristics of the subjects can affect repeatability and sEMG classification accuracy. In conclusion, the results of this paper can contribute to the development of more robust control systems for hand prostheses, while the presented data allows researchers to test repeatability in further analyses.
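
    The feature/classifier combination named above (per-channel mean absolute value and waveform length fed to a Random Forest) can be sketched as follows on synthetic sEMG windows; the simulated between-session "drift" is only a stand-in for the acquisition-to-acquisition variability the study measures.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mav_wl(window):
    """Mean absolute value and waveform length per channel (window: samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(1)

def make_acquisition(n_grasps=7, reps=12, n_ch=12, n_samp=400, drift=0.0):
    """Synthetic acquisition; `drift` mimics electrode shift between sessions."""
    X, y = [], []
    for g in range(n_grasps):
        for _ in range(reps):
            w = rng.normal(scale=1.0 + 0.2 * g + drift, size=(n_samp, n_ch))
            X.append(mav_wl(w))
            y.append(g)
    return np.array(X), np.array(y)

X_day1, y_day1 = make_acquisition(drift=0.0)
X_day2, y_day2 = make_acquisition(drift=0.1)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_day1, y_day1)
print("same-acquisition accuracy:", clf.score(X_day1, y_day1))      # optimistic (train = test)
print("cross-acquisition accuracy:", clf.score(X_day2, y_day2))     # drops, as in the study
```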

  7. A multi-DOF robotic exoskeleton interface for hand motion assistance.

    Science.gov (United States)

    Iqbal, Jamshed; Tsagarakis, Nikos G; Caldwell, Darwin G

    2011-01-01

    This paper outlines the design and development of a robotic exoskeleton based rehabilitation system. A portable direct-driven optimized hand exoskeleton system has been proposed. The optimization procedure primarily based on matching the exoskeleton and finger workspaces guided the system design. The selection of actuators for the proposed system has emerged as a result of experiments with users of different hand sizes. Using commercial sensors, various hand parameters, e.g. maximum and average force levels have been measured. The results of these experiments have been mapped directly to the mechanical design of the system. An under-actuated optimum mechanism has been analysed followed by the design and realization of the first prototype. The system provides both position and force feedback sensory information which can improve the outcomes of a professional rehabilitation exercise.

  8. A neuromorphic architecture for object recognition and motion anticipation using burst-STDP.

    Directory of Open Access Journals (Sweden)

    Andrew Nere

    Full Text Available In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips.
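
    A toy leaky-integrate-and-fire neuron with binary synapses, illustrating the basic building block referred to above; the parameters, input statistics and absence of any STDP rule are simplifying assumptions.

```python
import numpy as np

def lif_run(spike_trains, weights, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron driven by binary input spikes through binary synapses.
    spike_trains: (T, n_inputs) array of 0/1, weights: (n_inputs,) array of 0/1."""
    v, out = 0.0, []
    for spikes in spike_trains:
        v += dt * (-v / tau) + weights @ spikes      # leak plus synaptic input
        if v >= v_thresh:                            # threshold crossing -> output spike
            out.append(1)
            v = v_reset
        else:
            out.append(0)
    return np.array(out)

rng = np.random.default_rng(0)
inputs = (rng.random((200, 50)) < 0.02).astype(float)   # sparse Poisson-like input spikes
w = (rng.random(50) < 0.5).astype(float)                # binary synapses
print("output spike count:", lif_run(inputs, w).sum())
```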

  9. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focuses on emotion recognition from the face and on hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device.

  10. Identification of Object Dynamics Using Hand Worn Motion and Force Sensors

    Directory of Open Access Journals (Sweden)

    Henk G. Kortier

    2016-11-01

    Full Text Available Emerging microelectromechanical system (MEMS)-based sensors have lately become much more applicable for on-body measurement purposes. In particular, the development of a fingertip-sized tri-axial force sensor gives the opportunity to measure interaction forces between the human hand and environmental objects. We have developed a new prototype device that allows simultaneous 3D force and movement measurements at the finger and thumb tips. The combination of interaction forces and movements makes it possible to identify the dynamical characteristics of the object being handled by the hand. With this device attached to the hand, a subject manipulated mass and spring objects under varying conditions. We were able to identify and estimate the weight of two physical mass objects (0.44 kg: 29.3% ± 18.9%; 0.28 kg: 19.7% ± 10.6%) and the spring constant of a physical spring object (16.3% ± 12.6%). The system is a first attempt to quantify the interactions of the hand with the environment and has many potential applications in rehabilitation, ergonomics and sports.
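
    How force and motion signals can identify such object parameters, in a minimal form: regress force on acceleration for a mass object and on displacement for a spring object. The synchronized, clean alignment of the signals and the neglect of the hand's own dynamics are assumptions made for illustration only.

```python
import numpy as np

def estimate_mass(force, acc):
    """Least-squares fit of F = m * a (any constant offset is absorbed in a bias term)."""
    A = np.c_[acc, np.ones_like(acc)]
    m, _bias = np.linalg.lstsq(A, force, rcond=None)[0]
    return m

def estimate_stiffness(force, displacement):
    """Least-squares fit of F = k * x for a spring object."""
    A = np.c_[displacement, np.ones_like(displacement)]
    k, _bias = np.linalg.lstsq(A, force, rcond=None)[0]
    return k

# toy signals: a 0.44 kg mass shaken sinusoidally, and a 200 N/m spring being compressed
t = np.linspace(0, 2, 400)
acc = 3.0 * np.sin(2 * np.pi * 2 * t)
f_mass = 0.44 * acc + 0.02 * np.random.randn(t.size)      # noisy force measurement
x = 0.03 * np.sin(2 * np.pi * 1 * t)
f_spring = 200.0 * x + 0.05 * np.random.randn(t.size)

print("estimated mass [kg]:", estimate_mass(f_mass, acc))
print("estimated stiffness [N/m]:", estimate_stiffness(f_spring, x))
```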

  11. Real Time Hand Motion Reconstruction System for Trans-Humeral Amputees Using EEG and EMG

    Directory of Open Access Journals (Sweden)

    Jacobo Fernandez-Vargas

    2016-08-01

    Full Text Available Predicting a hand's position using only biosignals is a complex problem that has not been completely solved. The only reliable solutions currently available require invasive surgery. Attempts using non-invasive technologies are rare, and have usually led to lower correlation values between the real and the reconstructed position than those required for real-world applications. In this study, we propose a solution for reconstructing the hand's position in three dimensions using EEG and EMG signals recorded from the shoulder area. This approach would be valid for most trans-humeral amputees. In order to find the best solution, we tested four different architectures for the system based on artificial neural networks. Our results show that it is possible to reconstruct the hand's motion trajectory with a correlation value of up to 0.809, compared to a typical value in the literature of 0.6. We also demonstrated that both EEG and EMG contribute jointly to the motion reconstruction. Furthermore, we discovered that the system architectures do not change the results radically. In addition, our results suggest that different motions may have different brain activity patterns that could be detected through EEG. Finally, we suggest a method to study non-linear relations in the brain through the EEG signals, which may lead to a more accurate system.

  12. Dexterous hand gestures recognition based on low-density sEMG signals for upper-limb forearm amputees

    Directory of Open Access Journals (Sweden)

    John Jairo Villarejo Mayor

    2017-08-01

    Full Text Available Introduction: Intuitive prosthesis control is one of the most important challenges in order to reduce the user effort in learning how to use an artificial hand. This work presents the development of a novel method for pattern recognition of sEMG signals able to discriminate, in a very accurate way, dexterous hand and finger movements using a reduced number of electrodes, which implies more confidence and usability for amputees. Methods: The system was evaluated for ten forearm amputees and the results were compared with the performance of able-bodied subjects. Multiple sEMG features based on fractal analysis (detrended fluctuation analysis and Higuchi's fractal dimension) combined with traditional magnitude-based features were analyzed. Genetic algorithms and sequential forward selection were used to select the best set of features. Support vector machine (SVM), K-nearest neighbors (KNN) and linear discriminant analysis (LDA) were analyzed to classify individual finger flexion, hand gestures and different grasps using four electrodes, performing contractions in a natural way to accomplish these tasks. Statistical significance was computed for all the methods using different sets of features, for both groups of subjects (able-bodied and amputees). Results: The results showed average accuracy up to 99.2% for able-bodied subjects and 98.94% for amputees using SVM, followed very closely by KNN. However, KNN also produces a good performance, as it has a lower computational complexity, which implies an advantage for real-time applications. Conclusion: The results show that the proposed method is promising for accurately controlling dexterous prosthetic hands, providing more functionality and better acceptance for amputees.
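
    A sketch of one fractal feature plus an SVM, using a standard textbook formulation of the Higuchi fractal dimension on synthetic channels; the feature selection, the DFA feature and the electrode setup of the study are not reproduced, so treat this as an assumption-laden illustration.

```python
import numpy as np
from sklearn.svm import SVC

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (standard formulation)."""
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series starting at offset m
            if len(idx) < 2:
                continue
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lk.append(np.mean(lengths))
    # FD is the slope of log(curve length) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(2)
X, y = [], []
for label in range(4):                        # four toy gesture classes
    for _ in range(40):
        sig = rng.normal(size=(4, 600))       # 4 sEMG channels per trial
        sig *= 1.0 + 0.2 * label              # class-dependent scaling (toy stand-in for real sEMG)
        feats = [higuchi_fd(ch) for ch in sig] + [np.mean(np.abs(ch)) for ch in sig]
        X.append(feats)
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])
print("toy accuracy:", clf.score(X[1::2], y[1::2]))
```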

  13. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition, 3D-MoS (Three Dimensional Move on Sensing), based on visual servoing technology was designed, and an on-board hand-eye-dual-camera robot system has been developed to reduce the risk of radiation exposure during decontamination processes by a filter press machine that solidifies and reduces the volume of radiation-contaminated soil. The features of 3D-MoS include: (1) both hand-eye cameras take images of the target object near the intersection of the lenses' centerlines; (2) observation at the intersection enables both cameras to see the target object almost at the center of both images; (3) this reduces the effect of lens aberration and improves the accuracy of three-dimensional position detection. In this study, an accuracy validation test of the interdigitation of the robot's hand into the filter cloth rod of the filter press was performed; this task is crucial for the robot to remove the contaminated cloth from the filter press machine automatically and for preventing workers from being exposed to radiation. The following results were obtained: (1) the 3D-MoS controlled robot could recognize the rod at an arbitrary position within the designated space, and all insertion tests were carried out successfully; (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 mm with a standard deviation of 0.6 mm or less. (author)

  14. A New Profile Shape Matching Stereovision Algorithm for Real-time Human Pose and Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2014-02-01

    Full Text Available This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.

  15. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
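
    A compact sketch of that best-performing configuration under assumed details (channel averaging for desaturation, raw pixel intensities as the feature vector, a 1-nearest-neighbour classifier); the actual image representation used by the authors may differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def desaturate(rgb_image):
    """Desaturation pre-processing: average the colour channels into one grey channel."""
    return rgb_image.mean(axis=2)

def to_feature(rgb_image):
    return desaturate(rgb_image).ravel() / 255.0

# toy data: 6 gesture classes, each a brightness pattern on a 32x32x3 "image"
rng = np.random.default_rng(3)
X, y = [], []
for label in range(6):
    template = rng.random((32, 32, 3)) * 255
    for _ in range(20):
        img = np.clip(template + rng.normal(scale=15, size=template.shape), 0, 255)
        X.append(to_feature(img))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = KNeighborsClassifier(n_neighbors=1).fit(X[::2], y[::2])
print("toy accuracy:", clf.score(X[1::2], y[1::2]))
```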

  16. Work hands and feet in motion on the vertical ladder into the prosthesis disabled lower limb

    Directory of Open Access Journals (Sweden)

    Li Yugang

    2012-08-01

    Full Text Available The influence of ascent and descent conditions on stairs on people with disabilities is examined. The study involved 12 people with a prosthesis of the right or left lower limb. The purpose was to determine the specific conditions faced by disabled people in the process of adapting to the complicated conditions of everyday life. Underlying this approach is the study of the biomechanical characteristics of the movements of disabled people under special conditions. The factors that influence the effectiveness of movement on stairs by people with disabilities are identified. These include the slope angle, the compensatory effort of the hands, and the ability to maintain balance while moving. These studies provide a basis for meaningful solutions to improve movement by people with disabilities under complicated conditions.

  17. Recognition

    DEFF Research Database (Denmark)

    Gimmler, Antje

    2017-01-01

    In this article, I shall examine the cognitive, heuristic and theoretical functions of the concept of recognition. To evaluate both the explanatory power and the limitations of a sociological concept, the theory construction must be analysed along with its actual productivity for sociological theory.

  18. Motion tracking and electromyography assist the removal of mirror hand contributions to fNIRS images acquired during a finger tapping task performed by children with cerebral palsy

    Science.gov (United States)

    Hervey, Nathan; Khan, Bilal; Shagman, Laura; Tian, Fenghua; Delgado, Mauricio R.; Tulchin-Francis, Kirsten; Shierk, Angela; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J.; Liu, Hanli; MacFarlane, Duncan; Alexandrakis, George

    2013-03-01

    Functional neurological imaging has been shown to be valuable in evaluating brain plasticity in children with cerebral palsy (CP). In recent studies it has been demonstrated that functional near-infrared spectroscopy (fNIRS) is a viable and sensitive method for imaging motor cortex activities in children with CP. However, during unilateral finger tapping tasks children with CP often exhibit mirror motions (unintended motions in the non-tapping hand), and current fNIRS image formation techniques do not account for this. Therefore, the resulting fNIRS images contain activation from intended and unintended motions. In this study, cortical activity was mapped with fNIRS on four children with CP and five controls during a finger tapping task. Finger motion and arm muscle activation were concurrently measured using motion tracking cameras and electromyography (EMG). Subject-specific regressors were created from motion capture and EMG data and used in a general linear model (GLM) analysis in an attempt to create fNIRS images representative of different motions. The analysis provided an fNIRS image representing activation due to motion and muscle activity for each hand. This method could prove to be valuable in monitoring brain plasticity in children with CP by providing more consistent images between measurements. Additionally, muscle effort versus cortical effort was compared between control and CP subjects. More cortical effort was required to produce similar muscle effort in children with CP. It is possible this metric could be a valuable diagnostic tool in determining response to treatment.
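
    The subject-specific regressor idea reduces to an ordinary least-squares general linear model: each fNIRS channel is regressed on a tapping-hand regressor and a mirror-hand regressor, and the fitted betas separate the two contributions. The boxcar regressors below are synthetic stand-ins for the motion-capture and EMG traces, so this is only a sketch of the principle.

```python
import numpy as np

def glm_betas(signal, regressors):
    """Fit signal = X @ beta + error, where X holds the regressors plus a constant column."""
    X = np.column_stack(regressors + [np.ones_like(signal)])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta

# synthetic task design: boxcar for the tapping hand, weaker sporadic boxcar for mirror motion
t = np.arange(0, 300)
tapping = ((t // 30) % 2).astype(float)               # 30 s on/off blocks
mirror = tapping * (np.random.default_rng(4).random(t.size) < 0.4)

# a toy fNIRS channel mostly driven by the tapping hand, with some mirror contribution
channel = 1.0 * tapping + 0.3 * mirror + 0.1 * np.random.randn(t.size)

beta = glm_betas(channel, [tapping, mirror])
print("tapping beta: %.2f, mirror beta: %.2f" % (beta[0], beta[1]))
```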

  19. Waving real hand gestures recorded by wearable motion sensors to a virtual car and driver in a mixed-reality parking game

    NARCIS (Netherlands)

    Bannach, D.; Amft, O.D.; Kunze, K.S.; Heinz, E.A.; Tröster, G.; Lukowicz, P.

    2007-01-01

    We envision to add context awareness and ambient intelligence to edutainment and computer gaming applications in general. This requires mixed-reality setups and ever-higher levels of immersive human-computer interaction. Here, we focus on the automatic recognition of natural human hand gestures

  20. Eye Tracking Reveals a Crucial Role for Facial Motion in Recognition of Faces by Infants

    Science.gov (United States)

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…

  1. Camera Motion and Surrounding Scene Appearance as Context for Action Recognition

    KAUST Repository

    Heilbron, Fabian Caba; Thabet, Ali Kassem; Niebles, Juan Carlos; Ghanem, Bernard

    2015-01-01

    This paper describes a framework for recognizing human actions in videos by incorporating a new set of visual cues that represent the context of the action. We develop a weak foreground-background segmentation approach in order to robustly extract not only foreground features that are focused on the actors, but also global camera motion and contextual scene information. Using dense point trajectories, our approach separates and describes the foreground motion from the background, represents the appearance of the extracted static background, and encodes the global camera motion that interestingly is shown to be discriminative for certain action classes. Our experiments on four challenging benchmarks (HMDB51, Hollywood2, Olympic Sports, and UCF50) show that our contextual features enable a significant performance improvement over state-of-the-art algorithms.

  2. Camera Motion and Surrounding Scene Appearance as Context for Action Recognition

    KAUST Repository

    Heilbron, Fabian Caba

    2015-04-17

    This paper describes a framework for recognizing human actions in videos by incorporating a new set of visual cues that represent the context of the action. We develop a weak foreground-background segmentation approach in order to robustly extract not only foreground features that are focused on the actors, but also global camera motion and contextual scene information. Using dense point trajectories, our approach separates and describes the foreground motion from the background, represents the appearance of the extracted static background, and encodes the global camera motion that interestingly is shown to be discriminative for certain action classes. Our experiments on four challenging benchmarks (HMDB51, Hollywood2, Olympic Sports, and UCF50) show that our contextual features enable a significant performance improvement over state-of-the-art algorithms.

  3. Classifying Multiple Types of Hand Motions Using Electrocorticography During Intraoperative Awake Craniotomy & Seizure Monitoring Processes - Case Studies

    Directory of Open Access Journals (Sweden)

    Tao eXie

    2015-10-01

    Full Text Available In this work, some case studies were conducted to classify several kinds of hand motions from electrocorticography (ECoG) signals during intraoperative awake craniotomy and extraoperative seizure monitoring processes. Four subjects (P1, P2 with intractable epilepsy during seizure monitoring and P3, P4 with brain tumor during awake craniotomy) participated in the experiments. Subjects performed three types of hand motions (Grasp, Thumb-finger motion and Index-finger motion) contralateral to the motor cortex covered with ECoG electrodes. Two methods were used for signal processing. Method I: an autoregressive (AR) model with the Burg method was applied to extract features, an additional waveform length (WL) feature was considered, and linear discriminant analysis (LDA) was used as the classifier. Method II: stationary subspace analysis (SSA) was applied for data preprocessing, and the common spatial pattern (CSP) was used for feature extraction before the LDA decoding process. Applying Method I, the three-class accuracies for P1~P4 were 90.17%, 96.00%, 91.77% and 92.95%, respectively. For Method II, the three-class accuracies for P1~P4 were 72.00%, 93.17%, 95.22% and 90.36%, respectively. This study verified the possibility of decoding multiple hand motion types during an awake craniotomy, which is the first step towards dexterous neuroprosthetic control during surgical implantation, in order to verify the optimal placement of electrodes. The accuracy during awake craniotomy was comparable to results during seizure monitoring. This study also indicated that ECoG is a promising approach for precise identification of eloquent cortex during awake craniotomy, and might form a promising BCI system that could benefit both patients and neurosurgeons.

  4. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  5. Cinematic Motion by Hand

    DEFF Research Database (Denmark)

    Graca, Marina Estela

    2006-01-01

    Within Cinema, animation has always had an unclear relation with live-action recording since its very beginning. We learned – helped by ASIFA (International Animated Film Association) – that we should separate one from the other, and we also realized that we (still) don't have a general theory of cinema that embraces both. Yet, over the last years, animation and live-action footage have become completely tangled in cinematic productions. Obviously, this means that each of them is just a technical strategy supported by its own specialists, and as one became dominant, the other turned out to be marginal. But what...

  6. Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    Science.gov (United States)

    1992-09-01


  7. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.

  8. Modeling Attitude Dynamics in Simulink: A Study of the Rotational and Translational Motion of a Spacecraft Given Torques and Impulses Generated by RMS Hand Controllers

    Science.gov (United States)

    Mauldin, Rebecca H.

    2010-01-01

    In order to study and control the attitude of a spacecraft, it is necessary to understand the natural motion of a body in orbit. Assuming a spacecraft to be a rigid body, dynamics describes the complete motion of the vehicle by the translational and rotational motion of the body. The Simulink Attitude Analysis Model applies the equations of rigid body motion to the study of a spacecraft's attitude in orbit. Using a TCP/IP connection, Matlab reads the values of the Remote Manipulator System (RMS) hand controllers and passes them to Simulink as specified torque and impulse profiles. Simulink then uses the governing kinematic and dynamic equations of a rigid body in low Earth orbit (LEO) to plot the attitude response of a spacecraft for five seconds given known applied torques and impulses, and constant principal moments of inertia.
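
    The rotational part of such a model reduces to Euler's equations for a rigid body about its principal axes; the sketch below integrates them with a simple fixed-step scheme, using made-up inertia values and a constant torque in place of the RMS hand-controller and Simulink infrastructure.

```python
import numpy as np

def integrate_euler_equations(I, torque, omega0, dt=0.01, t_end=5.0):
    """Integrate I*dw/dt + w x (I*w) = torque for the body rates w (rad/s), principal axes."""
    I = np.asarray(I, dtype=float)
    omega = np.array(omega0, dtype=float)
    history = [omega.copy()]
    for _ in range(int(t_end / dt)):
        omega_dot = (torque - np.cross(omega, I * omega)) / I
        omega += dt * omega_dot                      # explicit Euler step
        history.append(omega.copy())
    return np.array(history)

# toy spacecraft: principal inertias in kg*m^2, small constant torque about the x axis
I = [1200.0, 1500.0, 900.0]
torque = np.array([0.5, 0.0, 0.0])                   # N*m, stand-in for a hand-controller command
rates = integrate_euler_equations(I, torque, omega0=[0.0, 0.01, 0.0])
print("body rates after 5 s [rad/s]:", rates[-1])
```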

  9. The Application of Leap Motion in Astronaut Virtual Training

    Science.gov (United States)

    Qingchao, Xie; Jiangang, Chao

    2017-03-01

    With the development of computer vision, virtual reality has been applied in astronaut virtual training. As an advanced optical device for hand tracking, Leap Motion provides precise and fluid tracking of the hands, which makes it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion; the ability of Leap Motion to handle occlusion is then analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.

  10. Two-dimensional laser servoing for precision motion control of an ODV robotic license plate recognition system

    Science.gov (United States)

    Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas

    2003-09-01

    As an outgrowth of a series of projects focused on mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D-laser scanner based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with an acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real-time.
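
    A reduced version of the bumper line-fitting step, using an ordinary total-least-squares fit instead of the Hough-based recursive fitting described, on a hypothetical point cloud already gated to a "bumper box".

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit to (N, 2) laser points: returns centroid and unit direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]                     # dominant direction of the point cloud
    return centroid, direction

def line_fit_residual(points, centroid, direction):
    """RMS perpendicular distance of the points to the fitted line (a confidence proxy)."""
    d = points - centroid
    perp = d - np.outer(d @ direction, direction)
    return np.sqrt(np.mean(np.sum(perp**2, axis=1)))

# toy "bumper box" scan: a straight bumper at about 1.5 m with sensor noise
x = np.linspace(-0.6, 0.6, 80)
pts = np.c_[x, 1.5 + 0.01 * np.random.randn(x.size)]
c, d = fit_line_tls(pts)
print("bumper centroid:", c, "orientation [deg]:", np.degrees(np.arctan2(d[1], d[0])))
print("fit residual [m]:", line_fit_residual(pts, c, d))
```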

  11. Lip motion recognition of speaker based on SIFT%基于SIFT的说话人唇动识别

    Institute of Scientific and Technical Information of China (English)

    马新军; 吴晨晨; 仲乾元; 李园园

    2017-01-01

    Aiming at the problem that lip features have too high a dimension and are sensitive to scale, a technique based on the Scale-Invariant Feature Transform (SIFT) algorithm was proposed to perform speaker authentication. Firstly, a simple video frame regularization algorithm was proposed to adjust lip videos of different lengths to the same length and to extract representative lip motion images. Then, a new algorithm based on SIFT key points was proposed to extract texture and motion features, which were integrated by Principal Component Analysis (PCA) to obtain representative lip motion features for authentication. Finally, a simple classification algorithm was presented based on the obtained features. The experimental results show that, compared with the common Local Binary Pattern (LBP) feature and the Histogram of Oriented Gradients (HOG) feature, the proposed feature extraction algorithm achieves a better False Acceptance Rate (FAR) and False Rejection Rate (FRR), which demonstrates that the whole speaker lip motion recognition algorithm is effective and can achieve the desired results.

  12. The Combined Effects of Adaptive Control and Virtual Reality on Robot-Assisted Fine Hand Motion Rehabilitation in Chronic Stroke Patients: A Case Study.

    Science.gov (United States)

    Huang, Xianwei; Naghdy, Fazel; Naghdy, Golshah; Du, Haiping; Todd, Catherine

    2018-01-01

    Robot-assisted therapy is regarded as an effective and reliable method for the delivery of highly repetitive training that is needed to trigger neuroplasticity following a stroke. However, the lack of fully adaptive assist-as-needed control of the robotic devices and an inadequate immersive virtual environment that can promote active participation during training are obstacles hindering the achievement of better training results with fewer training sessions required. This study thus focuses on these research gaps by combining these 2 key components into a rehabilitation system, with special attention on the rehabilitation of fine hand motion skills. The effectiveness of the proposed system is tested by conducting clinical trials on a chronic stroke patient and verified through clinical evaluation methods by measuring the key kinematic features such as active range of motion (ROM), finger strength, and velocity. By comparing the pretraining and post-training results, the study demonstrates that the proposed method can further enhance the effectiveness of fine hand motion rehabilitation training by improving finger ROM, strength, and coordination. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  13. Hand interception of occluded motion in humans: a test of model-based vs. on-line control.

    Science.gov (United States)

    La Scaleia, Barbara; Zago, Myrka; Lacquaniti, Francesco

    2015-09-01

    Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. Copyright © 2015 the American Physiological Society.
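
    The model-based prediction argued for above amounts to extrapolating the occluded segment with an internal model of gravity; a drag-free version of that extrapolation is just the ballistic equation below (air drag, which the paper also considers, is omitted for brevity, and the numeric state values are made up).

```python
import numpy as np

G = np.array([0.0, -9.81])          # gravity in the vertical plane, m/s^2

def extrapolate_ballistic(p0, v0, t):
    """Predicted position t seconds after occlusion onset, from the last seen state (p0, v0)."""
    p0, v0 = np.asarray(p0, float), np.asarray(v0, float)
    return p0 + v0 * t + 0.5 * G * t**2

# last visible state at the edge of the incline: 1.2 m/s horizontally, 0.3 m/s downward
p_hit = extrapolate_ballistic(p0=[0.0, 1.0], v0=[1.2, -0.3], t=0.35)
print("predicted interception point [m]:", p_hit)
```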

  14. Motion tracking and electromyography-assisted identification of mirror hand contributions to functional near-infrared spectroscopy images acquired during a finger-tapping task performed by children with cerebral palsy

    Science.gov (United States)

    Hervey, Nathan; Khan, Bilal; Shagman, Laura; Tian, Fenghua; Delgado, Mauricio R.; Tulchin-Francis, Kirsten; Shierk, Angela; Roberts, Heather; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J.; Liu, Hanli; MacFarlane, Duncan; Alexandrakis, George

    2014-01-01

    Abstract. Recent studies have demonstrated functional near-infrared spectroscopy (fNIRS) to be a viable and sensitive method for imaging sensorimotor cortex activity in children with cerebral palsy (CP). However, during unilateral finger tapping, children with CP often exhibit unintended motions in the nontapping hand, known as mirror motions, which confuse the interpretation of resulting fNIRS images. This work presents a method for separating some of the mirror motion contributions to fNIRS images and demonstrates its application to fNIRS data from four children with CP performing a finger-tapping task with mirror motions. Finger motion and arm muscle activity were measured simultaneously with fNIRS signals using motion tracking and electromyography (EMG), respectively. Subsequently, subject-specific regressors were created from the motion capture or EMG data and independent component analysis was combined with a general linear model to create an fNIRS image representing activation due to the tapping hand and one image representing activation due to the mirror hand. The proposed method can provide information on how mirror motions contribute to fNIRS images, and in some cases, it helps remove mirror motion contamination from the tapping hand activation images. PMID:26157980
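
    The separation step described above can be illustrated with a small general linear model (GLM) fit: subject-specific regressors built from the motion-capture and EMG recordings are regressed against an fNIRS channel, and the fitted weights apportion the signal between the tapping hand and the mirror hand. The sketch below is a minimal, hypothetical illustration in Python/NumPy; it omits the independent component analysis step of the published method, and the regressor construction and variable names are assumptions.

```python
import numpy as np

def glm_separate_contributions(fnirs_channel, tapping_reg, mirror_reg):
    """Least-squares GLM fit of one fNIRS channel time series against
    subject-specific regressors built from motion-capture (tapping hand)
    and EMG (mirror hand) recordings. Returns the fitted beta weights.
    Illustrative only: the published method also uses ICA, omitted here."""
    n = len(fnirs_channel)
    design = np.column_stack([
        np.ones(n),    # baseline / intercept
        tapping_reg,   # e.g., finger-flexion trace of the tapping hand
        mirror_reg,    # e.g., EMG envelope of the mirror hand
    ])
    betas, _, _, _ = np.linalg.lstsq(design, fnirs_channel, rcond=None)
    return {"baseline": betas[0], "tapping": betas[1], "mirror": betas[2]}

# Toy usage with synthetic data
rng = np.random.default_rng(0)
t = np.arange(300)
tapping = (np.sin(2 * np.pi * t / 60) > 0).astype(float)
mirror = np.roll(tapping, 5) * 0.4
signal = 1.0 + 0.8 * tapping + 0.3 * mirror + 0.1 * rng.standard_normal(t.size)
print(glm_separate_contributions(signal, tapping, mirror))
```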

  15. Hand Grasping Synergies As Biometrics.

    Science.gov (United States)

    Patel, Vrajeshri; Thukral, Poojita; Burns, Martin K; Florescu, Ionut; Chandramouli, Rajarathnam; Vinjamuri, Ramana

    2017-01-01

    Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. This includes iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement that integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies-postural synergies. In this proof of concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.
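
    As a rough illustration of how spatiotemporal movement synergies can be extracted with principal component analysis, the sketch below flattens each grasp's joint angular velocity profiles into one vector and keeps the leading components. This is only a plausible reading of the pipeline, assuming a (grasps x samples x joints) array layout; the original study's preprocessing, synergy count, and verification stage (equal-error-rate computation) are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_synergies(trials, n_synergies=3):
    """trials: (n_grasps, n_samples, n_joints) joint angular velocity profiles.
    Each grasp is flattened into one spatiotemporal vector before PCA, one
    common way to obtain "movement synergies"."""
    n_grasps, n_samples, n_joints = trials.shape
    X = trials.reshape(n_grasps, n_samples * n_joints)
    pca = PCA(n_components=n_synergies)
    scores = pca.fit_transform(X)          # per-grasp synergy weights
    synergies = pca.components_.reshape(n_synergies, n_samples, n_joints)
    return synergies, scores, pca.explained_variance_ratio_

# Toy usage with random data standing in for motion-capture recordings
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, 100, 10))   # 40 grasps, 100 samples, 10 joints
syn, scores, evr = extract_synergies(trials)
print(syn.shape, scores.shape, evr.round(3))
```

    In a verification setting, the per-grasp scores would then feed a matcher whose threshold trades off false accepts against false rejects to yield the reported equal error rate.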

  16. Hand Grasping Synergies As Biometrics

    Directory of Open Access Journals (Sweden)

    Ramana Vinjamuri

    2017-05-01

    Full Text Available Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. This includes iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement that integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies: postural synergies. In this proof of concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.

  17. A qualitative motion analysis study of voluntary hand movement induced by music in patients with Rett syndrome

    OpenAIRE

    Go, T

    2009-01-01

    Tohshin Go (Center for Baby Science, Doshisha University, Kizugawa, Kyoto, Japan) and Asako Mitani (Independent Music Therapist, Poco A Poco Music Room, Tokyo, Japan). Abstract: Patients with Rett syndrome are known to respond well to music irrespective of their physical and verbal disabilities. Therefore, the relationship between auditory rhythm and their behavior was investigated employing a two-dimensional motion analysis system. Ten female patients aged from three to 17 years were included. W...

  18. The effect of time on EMG classification of hand motions in able-bodied and transradial amputees

    DEFF Research Database (Denmark)

    Waris, Asim; Niazi, Imran Khan; Jamil, Mohsin

    2018-01-01

    While several studies have demonstrated the short-term performance of pattern recognition systems, long-term investigations are very limited. In this study, we investigated changes in classification performance over time. Ten able-bodied individuals and six amputees took part in this study. EMG ... was computed for all possible combinations between the days. For all subjects, sEMG (7.2 ± 7.6%), iEMG (11.9 ± 9.1%) and cEMG (4.6 ± 4.8%) were significantly different (P ...) ... difference between training and testing day increases. Furthermore, for iEMG, performance in amputees was directly proportional to the size of the residual limb.

  19. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    Science.gov (United States)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students to master the basics of dance figures.

  20. An Alternative Myoelectric Pattern Recognition Approach for the Control of Hand Prostheses: A Case Study of Use in Daily Life by a Dysmelia Subject

    Science.gov (United States)

    Ahlberg, Johan; Lendaro, Eva; Hermansson, Liselotte; Håkansson, Bo; Ortiz-Catalan, Max

    2018-01-01

    The functionality of upper limb prostheses can be improved by intuitive control strategies that use bioelectric signals measured at the stump level. One such strategy is the decoding of motor volition via myoelectric pattern recognition (MPR), which has shown promising results in controlled environments and more recently in clinical practice. However, not much has been reported about daily life implementation and real-time accuracy of these decoding algorithms. This paper introduces an alternative approach in which MPR allows intuitive control of four different grips and open/close in a multifunctional prosthetic hand. We conducted a clinical proof-of-concept in activities of daily life by constructing a self-contained, MPR-controlled, transradial prosthetic system provided with a novel user interface meant to log errors during real-time operation. The system was used for five days by a unilateral dysmelia subject whose hand had never developed, and who nevertheless learned to generate patterns of myoelectric activity, reported as intuitive, for multi-functional prosthetic control. The subject was instructed to manually log errors when they occurred via the user interface mounted on the prosthesis. This allowed the collection of information about prosthesis usage and real-time classification accuracy. The assessment of capacity for myoelectric control test was used to compare the proposed approach to the conventional prosthetic control approach, direct control. With the MPR approach, the subject reported more intuitive control when selecting the different grips, but also higher uncertainty during proportional continuous movements. This paper represents an alternative to the conventional use of MPR, one that may be particularly suitable for a certain type of amputee patient. Moreover, it represents a further validation of MPR with dysmelia cases. PMID:29637030

  1. Reverse control for humanoid robot task recognition.

    Science.gov (United States)

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
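
    The null-space projection that the method relies on can be written compactly: for a task with Jacobian J, the projector P = I - J⁺J (J⁺ the pseudoinverse) maps joint velocities onto motions that leave that task unaffected, which is what allows parallel tasks to be decoupled and compared against an observed motion. The NumPy sketch below is a simplified illustration of this decoupling idea, with made-up Jacobians and a naive consistency score; it is not the authors' full recognition pipeline.

```python
import numpy as np

def null_space_projector(J):
    """P = I - J^+ J projects joint velocities into the null space of the
    task with Jacobian J, i.e., the motions that do not affect that task."""
    J_pinv = np.linalg.pinv(J)
    return np.eye(J.shape[1]) - J_pinv @ J

# Hypothetical example on a 7-DoF arm: task A (e.g., keep the tray level)
# is removed from the observed motion, and the residual is scored against
# a candidate secondary task B.
rng = np.random.default_rng(2)
J_a = rng.standard_normal((3, 7))   # Jacobian of the primary task
J_b = rng.standard_normal((2, 7))   # Jacobian of a candidate secondary task

P_a = null_space_projector(J_a)
qdot_observed = rng.standard_normal(7)   # observed joint velocities
residual = P_a @ qdot_observed           # part not explained by task A
# A simple recognition cue: how much of the residual is consistent with task B
consistency = np.linalg.norm(J_b @ residual) / (np.linalg.norm(residual) + 1e-12)
print(f"residual norm: {np.linalg.norm(residual):.3f}, task-B consistency: {consistency:.3f}")
```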

  2. Design and Construction of a Bilateral Haptic System for the Remote Assessment of the Stiffness and Range of Motion of the Hand

    Directory of Open Access Journals (Sweden)

    Fabio Oscari

    2016-10-01

    Full Text Available The use of haptic devices in the rehabilitation of impaired limbs has become rather popular, given their proven effectiveness in promoting recovery. In a standard framework, such devices are used in rehabilitation centers, where patients interact with virtual tasks presented on a screen. To track their sessions, kinematic/dynamic parameters or performance scores are recorded. However, as Internet access is now available in almost every home, and in order to reduce the hospitalization time of the patient, the idea of doing rehabilitation at home is gaining wide acceptance. Medical care programs can be synchronized with the home rehabilitation device; patient data can be sent to a central server that can redirect them to the therapist's laptop (tele-healthcare). The critical issue is that the recorded data do not actually represent the clinical conditions of the patients according to the medical assessment scales, forcing them to frequently undergo clinical tests at the hospital. To respond to this demand, we propose the use of a bilateral master/slave haptic system that allows the clinician, who interacts with the master, to assess remotely and in real time the clinical conditions of the patient, who uses the home rehabilitation device as the slave. In this paper, we describe a proof of concept that highlights the main issues of such an application, limited to one degree of freedom and to the measurement of the stiffness and range of motion of the hand.

  3. Effectiveness of the Gaze Direction Recognition Task for Chronic Neck Pain and Cervical Range of Motion: A Randomized Controlled Pilot Study

    Directory of Open Access Journals (Sweden)

    Satoshi Nobusako

    2012-01-01

    Full Text Available We developed a mental task with gaze direction recognition (GDR) in which subjects observed neck rotation of another individual from behind and attempted to recognize the direction of gaze. A randomized controlled trial was performed in test (n = 9) and control (n = 8) groups of subjects with chronic neck pain undergoing physical therapy either with or without the GDR task, carried out over 12 sessions during a three-week period. Primary outcome measures were defined as the active range of motion and pain on rotation of the neck. Secondary outcome measures were reaction time (RT) and response accuracy in the GDR task group. ANOVA indicated a main effect of task session and of group, and an interaction with session. Post hoc testing showed that the GDR task group exhibited a significant simple main effect of session, and significant sequential improvement of neck motion and relief of neck pain. Rapid effectiveness was significant in both groups. The GDR task group had a significant session-to-session reduction of RTs in correct responses. In conclusion, the GDR task we developed provides a promising rehabilitation measure for chronic neck pain.

  4. Advanced Myoelectric Control for Robotic Hand-Assisted Training: Outcome from a Stroke Patient.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Li, Sheng; Zhou, Ping

    2017-01-01

    A hand exoskeleton driven by myoelectric pattern recognition was designed for stroke rehabilitation. It detects and recognizes the user's motion intent based on electromyography (EMG) signals, and then helps the user to accomplish hand motions in real time. The hand exoskeleton can perform six kinds of motions, including whole-hand closing/opening, tripod pinch/opening, and the "gun" sign/opening. A 52-year-old woman, 8 months after stroke, made 20 two-hour visits over 10 weeks to participate in robot-assisted hand training. Though she was unable to move the fingers of her right hand before the training, EMG activity could be detected on her right forearm. In each visit, she took four 10-minute robot-assisted training sessions, in which she repeated the aforementioned six motion patterns assisted by our intent-driven hand exoskeleton. After the training, her grip force increased from 1.5 to 2.7 kg, her pinch force increased from 1.5 to 2.5 kg, her Box and Block test score increased from 3 to 7, her Fugl-Meyer (Part C) score increased from 0 to 7, and her hand function increased from Stage 1 to Stage 2 in the Chedoke-McMaster assessment. The results demonstrate the feasibility of robot-assisted training driven by myoelectric pattern recognition after stroke.

  5. Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hee-Deok Yang

    2014-12-01

    Full Text Available Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated from Microsoft's Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.

  6. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions imposed by such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system in which two types of gesture recognition modules are prepared and a dedicated switching module selects the most appropriate one. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach) (for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.
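
    The two-module-plus-switch architecture can be pictured as in the Python sketch below. It is only a structural illustration: the recognizers are placeholders, and the motion-based switching rule stands in for the paper's dedicated switching module, whose actual criteria are not described here.

```python
from dataclasses import dataclass

@dataclass
class Result:
    label: str
    confidence: float

class DetectionBasedRecognizer:
    def recognize(self, frame) -> Result:
        # placeholder for a boosting-based hand-shape detector
        return Result("open_palm", 0.7)

class MotionBasedRecognizer:
    def recognize(self, prev_frame, frame) -> Result:
        # placeholder for frame-difference motion analysis
        return Result("swipe_left", 0.6)

class HybridGestureRecognizer:
    """Switches between a detection-based and a motion-based module based on
    a crude inter-frame motion measure (illustrative switching rule only)."""
    def __init__(self, motion_threshold=0.1):
        self.shape_module = DetectionBasedRecognizer()
        self.motion_module = MotionBasedRecognizer()
        self.motion_threshold = motion_threshold
        self.prev_frame = None

    def _motion_amount(self, frame) -> float:
        if self.prev_frame is None:
            return 0.0
        return sum(abs(a - b) for a, b in zip(frame, self.prev_frame)) / len(frame)

    def recognize(self, frame) -> Result:
        if self._motion_amount(frame) > self.motion_threshold:
            result = self.motion_module.recognize(self.prev_frame, frame)
        else:
            result = self.shape_module.recognize(frame)
        self.prev_frame = frame
        return result

# Toy usage with 1-D "frames"
rec = HybridGestureRecognizer()
print(rec.recognize([0.0] * 8))   # no motion yet -> shape module
print(rec.recognize([0.5] * 8))   # large frame difference -> motion module
```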

  7. Highly flexible self-powered sensors based on printed circuit board technology for human motion detection and gesture recognition.

    Science.gov (United States)

    Fuh, Yiin-Kuen; Ho, Hsi-Chun

    2016-03-04

    In this paper, we demonstrate a new integration of printed circuit board (PCB) technology-based self-powered sensors (PSSs) and direct-write, near-field electrospinning (NFES) with polyvinylidene fluoride (PVDF) micro/nano fibers (MNFs) as source materials. Integration with PCB technology is highly desirable for affordable mass production. In addition, we systematically investigate the effects of electrodes with intervals in the range of 0.15 mm to 0.40 mm on the resultant PSS output voltage and current. The results show that at a strain of 0.5% and 5 Hz, a PSS with a gap interval 0.15 mm produces a maximum output voltage of 3 V and a maximum output current of 220 nA. Under the same dimensional constraints, the MNFs are massively connected in series (via accumulation of continuous MNFs across the gaps ) and in parallel (via accumulation of parallel MNFs on the same gap) simultaneously. Finally, encapsulation in a flexible polymer with different interval electrodes demonstrated that electrical superposition can be realized by connecting MNFs collectively and effectively in serial/parallel patterns to achieve a high current and high voltage output, respectively. Further improvement in PSSs based on the effect of cooperativity was experimentally realized by rolling-up the device into a cylindrical shape, resulting in a 130% increase in power output due to the cooperative effect. We assembled the piezoelectric MNF sensors on gloves, bandages and stockings to fabricate devices that can detect different types of human motion, including finger motion and various flexing and extensions of an ankle. The firmly glued PSSs were tested on the glove and ankle respectively to detect and harvest the various movements and the output voltage was recorded as ∼1.5 V under jumping movement (one PSS) and ∼4.5 V for the clenched fist with five fingers bent concurrently (five PSSs). This research shows that piezoelectric MNFs not only have a huge impact on harvesting various external

  8. Highly flexible self-powered sensors based on printed circuit board technology for human motion detection and gesture recognition

    Science.gov (United States)

    Fuh, Yiin-Kuen; Ho, Hsi-Chun

    2016-03-01

    In this paper, we demonstrate a new integration of printed circuit board (PCB) technology-based self-powered sensors (PSSs) and direct-write, near-field electrospinning (NFES) with polyvinylidene fluoride (PVDF) micro/nano fibers (MNFs) as source materials. Integration with PCB technology is highly desirable for affordable mass production. In addition, we systematically investigate the effects of electrodes with intervals in the range of 0.15 mm to 0.40 mm on the resultant PSS output voltage and current. The results show that at a strain of 0.5% and 5 Hz, a PSS with a gap interval 0.15 mm produces a maximum output voltage of 3 V and a maximum output current of 220 nA. Under the same dimensional constraints, the MNFs are massively connected in series (via accumulation of continuous MNFs across the gaps ) and in parallel (via accumulation of parallel MNFs on the same gap) simultaneously. Finally, encapsulation in a flexible polymer with different interval electrodes demonstrated that electrical superposition can be realized by connecting MNFs collectively and effectively in serial/parallel patterns to achieve a high current and high voltage output, respectively. Further improvement in PSSs based on the effect of cooperativity was experimentally realized by rolling-up the device into a cylindrical shape, resulting in a 130% increase in power output due to the cooperative effect. We assembled the piezoelectric MNF sensors on gloves, bandages and stockings to fabricate devices that can detect different types of human motion, including finger motion and various flexing and extensions of an ankle. The firmly glued PSSs were tested on the glove and ankle respectively to detect and harvest the various movements and the output voltage was recorded as ∼1.5 V under jumping movement (one PSS) and ∼4.5 V for the clenched fist with five fingers bent concurrently (five PSSs). This research shows that piezoelectric MNFs not only have a huge impact on harvesting various external

  9. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
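
    To make the descriptor idea concrete, the sketch below concatenates a uniform-LBP histogram with Gabor magnitude statistics over a few orientations and trains a one-versus-one SVM, using scikit-image and scikit-learn. It is a static, simplified stand-in for the paper's spatial-temporal motion LBP and multiorientation fusion histogram; parameters and the random stand-in data are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.svm import SVC

def lbp_gabor_descriptor(gray, P=8, R=1, orientations=4):
    """Concatenate a uniform-LBP histogram with Gabor magnitude statistics."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    gabor_stats = []
    for k in range(orientations):
        real, imag = gabor(gray, frequency=0.25, theta=k * np.pi / orientations)
        mag = np.hypot(real, imag)
        gabor_stats.extend([mag.mean(), mag.std()])
    return np.concatenate([hist, gabor_stats])

# Toy training run on random images standing in for aligned facial regions
rng = np.random.default_rng(3)
X = np.array([lbp_gabor_descriptor((rng.random((32, 32)) * 255).astype(np.uint8))
              for _ in range(20)])
y = rng.integers(0, 3, size=20)   # three fake expression classes
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:5]))
```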

  10. A natural human hand model

    NARCIS (Netherlands)

    Van Nierop, O.A.; Van der Helm, A.; Overbeeke, K.J.; Djajadiningrat, T.J.P.

    2007-01-01

    We present a skeletal linked model of the human hand that has natural motion. We show how this can be achieved by introducing a new biology-based joint axis that simulates natural joint motion and a set of constraints that reduce an estimated 150 possible motions to twelve. The model is based on

  11. Neural manual vs. robotic assisted mobilization to improve motion and reduce pain hypersensitivity in hand osteoarthritis: study protocol for a randomized controlled trial.

    Science.gov (United States)

    Villafañe, Jorge Hugo; Valdes, Kristin; Imperio, Grace; Borboni, Alberto; Cantero-Téllez, Raquel; Galeri, Silvia; Negrini, Stefano

    2017-05-01

    [Purpose] The aim of the present study is to detail the protocol for a randomised controlled trial (RCT) of neural manual vs. robotic-assisted mobilization on pain sensitivity, as well as to analyse the quantitative and qualitative movement of the hand, in subjects with hand osteoarthritis. [Subjects and Methods] Seventy-two patients, aged 50 to 90 years old and of both genders, with a diagnosis of hand osteoarthritis (OA), will be recruited. Two groups of 36 participants will receive an experimental intervention (neurodynamic mobilization intervention plus exercise) or a control intervention (robotic-assisted passive mobilization plus exercise) for 12 sessions over 4 weeks. Assessment points will be at baseline, end of therapy, and 1 and 3 months after end of therapy. The outcomes of this intervention will be pain and the central pain processing mechanisms. [Result] Not applicable. [Conclusion] If there is a reduction in pain hypersensitivity in hand OA patients, it would suggest that supraspinal pain-inhibitory areas, including the periaqueductal gray matter, can be stimulated by joint mobilization.

  12. Stiff Hands

    Science.gov (United States)


  13. Hand Infections

    Science.gov (United States)


  14. Postura da mão e imagética motora: um estudo sobre reconhecimento de partes do corpo Hand posture and motor imagery: a body-part recognition study

    Directory of Open Access Journals (Sweden)

    AP Lameira

    2008-10-01

    Full Text Available OBJECTIVE: Like motor imagery, the recognition of body parts activates specific somatosensory representations. These representations are implicitly activated to compare the body with the stimulus. In the present study, we investigated the influence of proprioceptive information about hand posture on the recognition of body parts (hands), and we propose the use of this task in the rehabilitation of neurological patients. METHODS: Ten right-handed volunteers participated in the experiment. The task was to recognize the laterality of pictures of a hand presented in various perspectives and at various orientation angles. For a picture of the right hand, the volunteer pressed the right key, and for a picture of the left hand, the left key. The volunteers completed two sessions: one with the hands in a prone posture and one with the hands in a supine posture. RESULTS: Manual reaction times (MRT) were longer for views and orientations in which the actual movement is difficult to perform, showing that motor representations are engaged during the task to compare the body with the stimulus. In addition, the subject's posture had an influence at specific views and angles. CONCLUSIONS: These results show that motor representations are activated to compare the body with the stimulus and that hand posture influences this resonance between the stimulus and the body part.

  15. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this reason, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gestures into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
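
    The gesture-matching step can be illustrated with the textbook dynamic-programming form of DTW: a query trajectory is compared against each dictionary template and the nearest one wins. The sketch below is a minimal NumPy version under that assumption; the system's actual feature representation, warping constraints, and thresholds are not specified here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two gesture trajectories,
    given as arrays of shape (n_frames, n_features), e.g., tracked joint
    coordinates. Textbook algorithm, not the authors' exact implementation."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, dictionary):
    """Nearest-template classification over {label: template_trajectory}."""
    return min(dictionary, key=lambda label: dtw_distance(query, dictionary[label]))

# Toy usage with 2-D trajectories standing in for tracked hand positions
t = np.linspace(0, 1, 50)[:, None]
dictionary = {"wave": np.hstack([t, np.sin(6 * t)]),
              "push": np.hstack([t, t])}
query = np.hstack([t, np.sin(6 * t) + 0.05])
print(classify(query, dictionary))   # expected: "wave"
```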

  16. Effect of robotic-assisted three-dimensional repetitive motion to improve hand motor function and control in children with handwriting deficits: a nonrandomized phase 2 device trial.

    Science.gov (United States)

    Palsbo, Susan E; Hood-Szivek, Pamela

    2012-01-01

    We explored the efficacy of robotic technology in improving handwriting in children with impaired motor skills. Eighteen participants had impairments arising from cerebral palsy (CP), autism spectrum disorder (ASD), attention deficit disorder (ADD), attention deficit hyperactivity disorder (ADHD), or other disorders. The intervention was robotic-guided three-dimensional repetitive motion in 15-20 daily sessions of 25-30 min each over 4-8 wk. Fine motor control improved for the children with learning disabilities and those ages 9 or older but not for those with CP or under age 9. All children with ASD or ADHD referred for slow writing speed were able to increase speed while maintaining legibility. Three-dimensional, robot-assisted, repetitive motion training improved handwriting fluidity in children with mild to moderate fine motor deficits associated with ASD or ADHD within 10 hr of training. This dosage may not be sufficient for children with CP. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  17. Leap Motion Device Used to Control a Real Anthropomorphic Gripper

    Directory of Open Access Journals (Sweden)

    Ionel Staretu

    2016-06-01

    Full Text Available This paper presents for the first time the use of the Leap Motion device to control an anthropomorphic gripper with five fingers. First, a description of the Leap Motion device is presented, highlighting its main functional characteristics, followed by testing of its use for capturing the movements of a human hand's fingers in different configurations. Next, the HandCommander soft module and the Interface Controller application are described. The HandCommander is a software module created to facilitate interaction between a human hand and the GraspIT virtual environment, and the Interface Controller application is required to send motion data to the virtual environment and to test the communication protocol. For the test, a prototype of an anthropomorphic gripper with five fingers was made, including a proper hardware system of command and control, which is briefly presented in this paper. Following the creation of the prototype, the command system performance test was conducted under real conditions, evaluating the recognition efficiency of the objects to be gripped and the efficiency of the command and control strategies for the gripping process. The gripping test is exemplified by the gripping of an object, such as a screw spanner. It was found that the command system, both in terms of capturing human hand gestures with the Leap Motion device and effective object gripping, is operational. Suggestive figures are presented as examples.

  18. Hand Fractures

    Science.gov (United States)

    DESCRIPTION: The bones of the hand serve as a framework. This framework supports the muscles that make the wrist and fingers move. When ...

  19. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were performed to recognize the extracted hand trajectories. In previous research, the Condensation-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-based method.

  20. Hand Therapy

    Science.gov (United States)

    ... from conditions such as carpal tunnel syndrome and tennis elbow, as well as from chronic problems such as ...

  1. Hand Anatomy

    Science.gov (United States)


  2. [Hand osteoarthritis].

    Science.gov (United States)

    Šenolt, Ladislav

    Hand osteoarthritis (OA) is a common chronic disorder causing pain and limitation of mobility of the affected joints. The prevalence of hand OA increases with age and more often affects females. Clinical signs do not necessarily correlate with radiographic findings: symptomatic hand OA affects approximately 26% of adult subjects, but radiographic changes can be found in up to two thirds of females and half of males older than 55 years. The disease course differs among individual patients, and hand OA is a heterogeneous disease. Nodal hand OA, the most common subtype, affects the interphalangeal joints, whereas thumb base OA affects the first carpometacarpal joint. Erosive OA represents a specific subtype of hand OA, which is associated with joint inflammation, more pain, functional limitation and erosive findings on radiographs. Treatment of OA is limited: analgesics and nonsteroidal anti-inflammatory drugs are the only agents that reduce symptoms. New insights into the pathogenesis of the disease should contribute to the development of novel effective treatments for hand OA.

  3. Evaluating EMG Feature and Classifier Selection for Application to Partial-Hand Prosthesis Control

    Directory of Open Access Journals (Sweden)

    Adenike A. Adewuyi

    2016-10-01

    Full Text Available Pattern recognition-based myoelectric control of upper limb prostheses has the potential to restore control of multiple degrees of freedom. Though this control method has been extensively studied in individuals with higher-level amputations, few studies have investigated its effectiveness for individuals with partial-hand amputations. Most partial-hand amputees retain a functional wrist and the ability of pattern recognition-based methods to correctly classify hand motions from different wrist positions is not well studied. In this study, focusing on partial-hand amputees, we evaluate (1) the performance of non-linear and linear pattern recognition algorithms and (2) the performance of optimal EMG feature subsets for classification of four hand motion classes in different wrist positions for 16 non-amputees and 4 amputees. Our results show that linear discriminant analysis and linear and non-linear artificial neural networks perform significantly better than the quadratic discriminant analysis for both non-amputees and partial-hand amputees. For amputees, including information from multiple wrist positions significantly decreased error (p<0.001), but no further significant decrease in error occurred when more than 4, 2, or 3 positions were included for the extrinsic (p=0.07), intrinsic (p=0.06), or combined extrinsic and intrinsic muscle EMG (p=0.08), respectively. Finally, we found that a feature set determined by selecting optimal features from each channel outperformed the commonly used time domain (p<0.001) and time domain/autoregressive (p<0.01) feature sets. This method can be used as a screening filter to select the features from each channel that provide the best classification of hand postures across different wrist positions.
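
    As a concrete, hedged example of the kind of pipeline being compared, the sketch below computes four standard time-domain EMG features per channel (mean absolute value, waveform length, zero crossings, slope sign changes) and trains a linear discriminant analysis classifier with scikit-learn. Window length, threshold, and the synthetic data are illustrative assumptions, not the study's settings or its per-channel feature-selection procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def time_domain_features(window, thresh=0.01):
    """window: (n_samples, n_channels) raw EMG segment -> per-channel
    MAV, waveform length, zero crossings, and slope sign changes."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(np.diff(window, axis=0)) > thresh), axis=0)
    diffs = np.diff(window, axis=0)
    ssc = np.sum((diffs[:-1] * diffs[1:] < 0) &
                 (np.abs(diffs[:-1]) > thresh), axis=0)
    return np.concatenate([mav, wl, zc, ssc])

# Toy data: 120 windows, 200 samples, 6 channels, 4 hand-motion classes
rng = np.random.default_rng(4)
windows = rng.standard_normal((120, 200, 6))
labels = rng.integers(0, 4, size=120)
X = np.array([time_domain_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels).round(2))
```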

  4. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain the neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, the robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can be easily used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, with the XABSL programming platform, the experiment showed that the cube-stacking task was easily programmed by human demonstration.
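
    A minimal sketch of the gesture-classification module, assuming grayscale, skin-filtered and pose-normalised hand crops as input, is given below in PyTorch. The architecture, input size, and class ordering are invented for illustration; the paper's CNN design may differ.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Toy CNN mapping a pre-processed hand image to one of four
    therblig gestures (TE, TL, G, RL); illustrative architecture only."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, 64, 64) grayscale hand crops
        return self.classifier(self.features(x))

model = GestureCNN()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)                # torch.Size([8, 4])
labels = ["TE", "TL", "G", "RL"]
print([labels[i] for i in logits.argmax(dim=1)])
```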

  5. Robotic Hand

    Science.gov (United States)

    1993-01-01

    The Omni-Hand was developed by Ross-Hime Designs, Inc. for Marshall Space Flight Center (MSFC) under a Small Business Innovation Research (SBIR) contract. The multiple digit hand has an opposable thumb and a flexible wrist. Electric muscles called Minnacs power wrist joints and the interchangeable digits. Two hands have been delivered to NASA for evaluation for potential use on space missions and the unit is commercially available for applications like hazardous materials handling and manufacturing automation. Previous SBIR contracts resulted in the Omni-Wrist and Omni-Wrist II robotic systems, which are commercially available for spray painting, sealing, ultrasonic testing, as well as other uses.

  6. Side-View Face Recognition

    NARCIS (Netherlands)

    Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    As a widely used biometric, face recognition has many advantages, such as being non-intrusive, natural and passive. On the other hand, in real-life scenarios with uncontrolled environments, pose variation up to side-view positions makes face recognition a challenging task. In this paper we discuss

  7. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    Science.gov (United States)

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as they intuitively convey the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
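
    The "joint kernels of spatial adjacency and thermal range similarity" can be illustrated with a toy joint (cross) bilateral filter in which the thermal image guides the smoothing of the visible-light image. The NumPy sketch below is written for clarity rather than speed, and its kernel sizes, sigmas, and synthetic images are assumptions; the paper's actual filter design is not reproduced.

```python
import numpy as np

def joint_filter(visible, thermal, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Toy joint bilateral filter: smooths the visible image using a spatial
    Gaussian kernel combined with a range kernel computed on the *thermal*
    guidance image, the two cues named in the abstract."""
    h, w = visible.shape
    out = np.zeros_like(visible, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_v = np.pad(visible.astype(float), radius, mode="edge")
    pad_t = np.pad(thermal.astype(float), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            vis_patch = pad_v[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            th_patch = pad_t[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-((th_patch - thermal[y, x])**2) / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[y, x] = np.sum(weights * vis_patch) / np.sum(weights)
    return out

# Toy usage: a noisy visible image and a clean thermal guide of a "hand" blob
rng = np.random.default_rng(5)
thermal = np.zeros((32, 32))
thermal[8:24, 8:24] = 30.0
visible = thermal + 5 * rng.standard_normal((32, 32))
filtered = joint_filter(visible, thermal)
print(np.abs(filtered - thermal).mean() < np.abs(visible - thermal).mean())  # noise reduced
```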

  8. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    Directory of Open Access Journals (Sweden)

    Seongwan Kim

    2017-01-01

    Full Text Available The research on hand gestures has attracted many image processing-related studies, as they intuitively convey the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  9. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    Science.gov (United States)

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were thus larger in males than in females, but involved a smaller proportion of the markers. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of their related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
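
    The displacement statistics quoted above can be computed from marker trajectories in a few lines; the sketch below assumes a (frames x markers x 3) array with the neutral expression in the first frame, which is an illustrative assumption rather than the study's exact processing.

```python
import numpy as np

def marker_displacements(trajectories):
    """trajectories: (n_frames, n_markers, 3) motion-capture positions in mm.
    Returns each marker's maximum displacement from the neutral (first) frame
    and the mean of those per-marker maxima."""
    neutral = trajectories[0]
    disp = np.linalg.norm(trajectories - neutral, axis=2)  # (n_frames, n_markers)
    max_disp = disp.max(axis=0)                            # per-marker maximum
    return max_disp, max_disp.mean()

# Toy usage: 44 markers, 100 frames of a simulated expression
rng = np.random.default_rng(6)
base = rng.uniform(0, 100, size=(44, 3))
motion = np.cumsum(rng.standard_normal((100, 44, 3)), axis=0)
per_marker, mean_of_max = marker_displacements(base + motion)
print(per_marker.shape, round(float(mean_of_max), 2))
```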

  10. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Full Text Available Since a gesture involves a dynamic and complex motion, multiview observation and recognition are desirable. For a better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark, together with our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself could vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.
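
    A simple way to picture the integration step is late fusion of per-view scores with per-view reliability weights, as in the sketch below. The weighted-sum rule is only a stand-in for the swarm-based online integration described above, and the scores and weights are made up.

```python
import numpy as np

def fuse_views(view_scores, view_weights=None):
    """Late fusion of per-view recognition results.

    view_scores: (n_views, n_classes) per-view class scores or posteriors;
    view_weights: optional per-view reliabilities (e.g., reflecting learned
    view dependency of the target gesture set)."""
    scores = np.asarray(view_scores, dtype=float)
    if view_weights is None:
        view_weights = np.ones(scores.shape[0])
    w = np.asarray(view_weights, dtype=float)
    fused = (w[:, None] * scores).sum(axis=0) / w.sum()
    return int(np.argmax(fused)), fused

# Toy usage: 3 camera views, 5 gesture classes; the third view is down-weighted
per_view = [[0.1, 0.6, 0.1, 0.1, 0.1],
            [0.2, 0.5, 0.1, 0.1, 0.1],
            [0.7, 0.1, 0.1, 0.05, 0.05]]
label, fused = fuse_views(per_view, view_weights=[1.0, 1.0, 0.3])
print(label, fused.round(3))
```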

  11. Deficient Biological Motion Perception in Schizophrenia: Results from a Motion Noise Paradigm

    Directory of Open Access Journals (Sweden)

    Jejoong eKim

    2013-07-01

    Full Text Available Background: Schizophrenia patients exhibit deficient processing of perceptual and cognitive information. However, it is not well understood how basic perceptual deficits contribute to higher level cognitive problems in this mental disorder. Perception of biological motion, a motion-based cognitive recognition task, relies on both basic visual motion processing and social cognitive processing, thus providing a useful paradigm to evaluate the potentially hierarchical relationship between these two levels of information processing. Methods: In this study, we designed a biological motion paradigm in which basic visual motion signals were manipulated systematically by incorporating different levels of motion noise. We measured the performances of schizophrenia patients (n=21) and healthy controls (n=22) in this biological motion perception task, as well as in coherent motion detection, theory of mind, and a widely used biological motion recognition task. Results: Schizophrenia patients performed the biological motion perception task with significantly lower accuracy than healthy controls when perceptual signals were moderately degraded by noise. A more substantial degradation of perceptual signals, through using additional noise, impaired biological motion perception in both groups. Performance levels on biological motion recognition, coherent motion detection and theory of mind tasks were also reduced in patients. Conclusion: The results from the motion-noise biological motion paradigm indicate that in the presence of visual motion noise, the processing of biological motion information in schizophrenia is deficient. Combined with the results of poor basic visual motion perception (coherent motion task) and biological motion recognition, the association between basic motion signals and biological motion perception suggests a need to incorporate the improvement of visual motion perception in social cognitive remediation.

  12. Pattern recognition

    CERN Document Server

    Theodoridis, Sergios

    2003-01-01

    Pattern recognition is a scientific discipline that is becoming increasingly important in the age of automation and information handling and retrieval. Pattern Recognition, 2e covers the entire spectrum of pattern recognition applications, from image analysis to speech recognition and communications. This book presents cutting-edge material on neural networks (a set of linked microprocessors that can form associations and use pattern recognition to "learn") and enhances student motivation by approaching pattern recognition from the designer's point of view. A direct result of more than 10

  13. Hand eczema

    DEFF Research Database (Denmark)

    Ibler, K.S.; Jemec, G.B.E.; Flyvholm, M.-A.

    2012-01-01

    Background. Healthcare workers are at increased risk of developing hand eczema. Objectives. To investigate the prevalence and severity of self-reported hand eczema, and to relate the findings to demographic data, occupation, medical speciality, wards, shifts, and working hours. Patients/materials ... dermatitis, younger age, male sex (male doctors), and working hours. Eighty-nine per cent of subjects reported mild/moderate lesions. Atopic dermatitis was the only factor significantly related to severity. Sick leave was reported by 8% of subjects, and notification to the authorities by 12%. Conclusions ... or severity, but cultural differences between professions with respect to coping with the eczema were significant. Atopic dermatitis was related to increased prevalence and severity, and preventive efforts should be made for healthcare workers with atopic dermatitis.

  14. Hand Osteoblastoma

    Directory of Open Access Journals (Sweden)

    M. Farzan

    2006-06-01

    Full Text Available Background and Aim: Osteoblastoma is one of the rarest primary bone tumors. Although the small bones of the hands and feet are the third most common location for this tumor, hand involvement is very rare and few case reports have been published in the English-language literature. Materials and Methods: In this study, we report five cases of benign osteoblastoma of the hand, three in metacarpals and two in phalanges. The clinical features are not specific. The severe nocturnal, salicylate-responsive pain is not present in patients with osteoblastoma; the pain is dull, persistent and less localized. The clinical course is usually long, and there are often symptoms for months before medical attention is sought. Swelling is a more consistent finding in osteoblastoma of the hand, which we found in all of our patients. The radiologic findings are indistinctive, so preoperative diagnosis based on X-ray appearance is difficult. In all of our five cases, we failed to consider osteoblastoma as the primary diagnosis. Pathologically, osteoblastoma consists of a well-vascularized connective tissue stroma in which there is active production of osteoid and primitive woven bone. Treatment depends on the stage and localization of the tumor. Curettage and bone grafting are sufficient in stage 1 or stage 2, but in stage 3 wide resection is necessary to prevent recurrence. Osteosarcoma is the most important differential diagnosis, and misdiagnosis may lead to inappropriate operation.

  15. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, including neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
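
    The feature-level strategy can be sketched as weighting and concatenating the two feature groups and projecting with LDA onto a discriminative space, as below. Feature dimensions, weights, and the synthetic data are assumptions for illustration; the paper's actual descriptors and weighting scheme are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def early_fusion(face_feats, hand_feats, w_face=0.6, w_hand=0.4):
    """Weight and concatenate the two modality feature groups (early fusion)."""
    return np.hstack([w_face * face_feats, w_hand * hand_feats])

rng = np.random.default_rng(7)
n = 120
face = rng.standard_normal((n, 20))     # stand-in facial expression descriptors
hand = rng.standard_normal((n, 30))     # stand-in hand motion descriptors
labels = rng.integers(0, 12, size=n)    # 12 gesture classes, as in the paper

X = early_fusion(face, hand)
lda = LinearDiscriminantAnalysis(n_components=5)
X_proj = lda.fit_transform(X, labels)   # projection onto a discriminative space
print(X_proj.shape)                     # (120, 5)
```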

  16. Anthropomorphic Robot Hand And Teaching Glove

    Science.gov (United States)

    Engler, Charles D., Jr.

    1991-01-01

    Robotic forearm-and-hand assembly manipulates objects by performing wrist and hand motions with nearly human grasping ability and dexterity. Imitates hand motions of human operator who controls robot in real time by programming via exoskeletal "teaching glove". Telemanipulator systems based on this robotic-hand concept useful where humanlike dexterity required. Underwater, high-radiation, vacuum, hot, cold, toxic, or inhospitable environments potential application sites. Particularly suited to assisting astronauts on space station in safely executing unexpected tasks requiring greater dexterity than standard gripper.

  17. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition, human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were used to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and satisfactory performance in the VIVA Hand Detection Challenge.
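
    The detection scheme described above (candidate regions scored by a CNN with multiscale feature fusion) can be sketched generically. The sketch below is an assumption for illustration, not the paper's architecture: region proposals are taken as given, the two pooling scales and the linear head are arbitrary choices, and pretrained VGG16 weights would be loaded in practice.

# Score candidate regions for "hand / not hand" with CNN features pooled at two scales.
import torch
import torch.nn as nn
import torchvision

class TwoScaleHandScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # VGG16 convolutional backbone (load pretrained weights in practice).
        self.backbone = torchvision.models.vgg16(weights=None).features
        # Pool the same feature map at two resolutions to mimic multiscale fusion.
        self.pool_small = nn.AdaptiveAvgPool2d(3)
        self.pool_large = nn.AdaptiveAvgPool2d(7)
        fused_dim = 512 * 3 * 3 + 512 * 7 * 7
        self.classifier = nn.Linear(fused_dim, 2)   # hand vs. background

    def forward(self, crops):                       # crops: (N, 3, 224, 224) region crops
        fmap = self.backbone(crops)                 # (N, 512, 7, 7)
        f1 = self.pool_small(fmap).flatten(1)
        f2 = self.pool_large(fmap).flatten(1)
        return self.classifier(torch.cat([f1, f2], dim=1))

# Usage: crop each proposal from the image, resize to 224x224, and score it.
model = TwoScaleHandScorer().eval()
dummy_crops = torch.randn(4, 3, 224, 224)           # 4 hypothetical region proposals
with torch.no_grad():
    scores = model(dummy_crops).softmax(dim=1)[:, 1]
print(scores)                                        # per-proposal hand probability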

  18. Structural Motion Grammar for Universal Use of Leap Motion: Amusement and Functional Contents Focused

    Directory of Open Access Journals (Sweden)

    Byungseok Lee

    2018-01-01

    Full Text Available Motions performed with the Leap Motion controller are not standardized, even though its use in media content is spreading. Each content defines its own motions, thereby creating confusion for users. Therefore, to alleviate user inconvenience, this study categorized the motions commonly used in Amusement and Functional Contents and defined a Structural Motion Grammar that can be used universally based on this classification. To this end, the Motion Lexicon, a fundamental motion vocabulary, was defined, and an algorithm that enables real-time recognition of the Structural Motion Grammar was developed. Moreover, the proposed method was verified by user evaluation and quantitative comparison tests.

  19. A Control Strategy with Tactile Perception Feedback for EMG Prosthetic Hand

    Directory of Open Access Journals (Sweden)

    Changcheng Wu

    2015-01-01

    Full Text Available To improve the control effectiveness and make the prosthetic hand not only controllable but also perceivable, an EMG prosthetic hand control strategy is proposed in this paper. The control strategy consists of EMG self-learning motion recognition, a backstepping controller with stiffness fuzzy observation, and force tactile representation. EMG self-learning motion recognition is used to reduce the influence on EMG signals caused by the uncertainty of the contact position of the EMG sensors. The backstepping controller with stiffness fuzzy observation is used to realize position control and grasp force control. Velocity proportional control in free space and grasp force tracking control in restricted space can be realized by the same controller. The force tactile representation helps the user perceive the states of the prosthetic hand. Several experiments were implemented to verify the effect of the proposed control strategy. The results indicate that the proposed strategy is effective. During the experiments, the comments of the participants showed that the proposed strategy is a better choice for amputees because of the improved controllability and perceptibility.

  20. Myelopathy hand in cervical radiculopathy

    International Nuclear Information System (INIS)

    Hosono, Noboru; Mukai, Yoshihiro; Takenaka, Shota; Fuji, Takeshi; Sakaura, Hironobu; Miwa, Toshitada; Makino, Takahiro

    2010-01-01

    The so-called 'myelopathy hand', or characteristic finger paralysis, often recognized in cervical compression myelopathy, has been considered a unique manifestation of cervical myelopathy. We used our original grip and release test, a 15-second test in which finger motion is captured with a digital camera, to investigate whether cervical radiculopathy has the same characteristics as myelopathy hand. Thirty patients with pure radiculopathy, i.e., those who had radiating arm pain and evidence of corresponding nerve root impingement on X-ray images or MRI scans but did not have spinal cord compression, served as the subjects. In contrast to other radiculopathies, C7 radiculopathy was manifested by a significant reduction in the number of finger motion cycles on the affected side in comparison with the unaffected side, the same as in myelopathy hand. Uncoordinated finger motion was significantly more frequent on the affected side in C6 radiculopathy than on the unaffected side. These findings contradict the conventional notion that myelopathy hand is a unique manifestation of cervical myelopathy; some radiculopathies manifested the same kinds of finger paralysis observed in myelopathy hand. (author)

  1. Electromyography (EMG) signal recognition using combined discrete wavelet transform based adaptive neuro-fuzzy inference systems (ANFIS)

    Science.gov (United States)

    Arozi, Moh; Putri, Farika T.; Ariyanto, Mochammad; Khusnul Ari, M.; Munadi, Setiawan, Joga D.

    2017-01-01

    The number of people with disabilities increases from year to year, due to congenital factors, illness, accidents, and war. One form of disability is the loss of hand function. This condition motivates the search for solutions in the form of an artificial hand with abilities approaching those of a human hand. Developments in neuroscience now allow electromyography (EMG) signals to be used as the input for controlling the motion of an artificial prosthetic hand. This study is the first step of a larger research effort planned for the development of an artificial prosthetic hand with EMG signal input. This initial research focused on the study of EMG signal recognition. Preliminary results show that EMG signal recognition using a combined discrete wavelet transform and Adaptive Neuro-Fuzzy Inference System (ANFIS) produces an accuracy of 98.3% for training and 98.51% for testing. The recognized signals can thus be used as input to the Simulink block diagram of a prosthetic hand to be developed in the next study. The research will proceed with the construction of the artificial prosthetic hand, along with a Simulink control program that integrates everything into one system.
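
    The feature-extraction step mentioned in this record (discrete wavelet transform of EMG windows) can be sketched as follows. The wavelet, decomposition level, sub-band energy features, and the SVM used in place of the ANFIS classifier are assumptions for illustration, not the study's implementation.

# DWT sub-band features from windowed EMG signals, fed to a generic classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(window, wavelet="db4", level=3):
    """Energy of each wavelet sub-band of one EMG window (feature choice is an assumption)."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical data: 100 windows of 256 samples each, 4 hand-motion classes.
rng = np.random.default_rng(1)
windows = rng.normal(size=(100, 256))
labels = rng.integers(0, 4, size=100)

X = np.vstack([dwt_features(w) for w in windows])
clf = SVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))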

  2. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    Full Text Available This paper presents a method of speech recognition based on pattern recognition techniques. Learning consists of determining the unique characteristics of a word (cepstral coefficients), eliminating those characteristics that differ from one utterance of the word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
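
    The processing chain listed above (windowing, Fourier transform, magnitude spectrum, cepstral coefficients) can be illustrated with a minimal real-cepstrum sketch; the frame length and number of coefficients are assumptions, and the noise-removal and filtering steps are omitted.

# Real cepstrum of one audio frame: Hamming window -> |FFT| -> log -> inverse FFT.
import numpy as np

def cepstral_coefficients(frame, n_coeffs=13):
    """Return the first n_coeffs real-cepstrum coefficients of a frame."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    log_spectrum = np.log(spectrum + 1e-10)   # avoid log(0)
    cepstrum = np.fft.irfft(log_spectrum)
    return cepstrum[:n_coeffs]

# Hypothetical usage: a 25 ms frame of 16 kHz audio (400 samples).
rng = np.random.default_rng(2)
frame = rng.normal(size=400)
print(cepstral_coefficients(frame))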

  3. Flexible Piezoelectric Sensor-Based Gait Recognition

    Directory of Open Access Journals (Sweden)

    Youngsu Cha

    2018-02-01

    Full Text Available Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system with flexible piezoelectric sensors in loose clothing. The gait recognition system does not directly sense lower-body angles. It does, however, detect the transition between standing and walking. Specifically, we use the signals from the flexible sensors attached to the knee and hip parts on loose pants. We detect the periodic motion component using the discrete time Fourier series from the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system. In the monitoring system, the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate over 93 %.
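
    The periodicity check described above (detecting walking from a dominant gait-frequency component in the sensor signal) can be illustrated with a simple spectral sketch; the sampling rate, frequency band, and power-ratio threshold are assumptions for illustration, not the authors' parameters.

# Decide whether a sensor window contains walking by checking for a dominant
# periodic component in a plausible gait-frequency band.
import numpy as np

def is_walking(signal, fs=50.0, band=(0.5, 3.0), power_ratio_threshold=0.4):
    """Return True if the in-band spectral power dominates the window (thresholds are assumed)."""
    signal = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ratio = spectrum[in_band].sum() / (spectrum[1:].sum() + 1e-12)   # ignore DC
    return ratio > power_ratio_threshold

# Hypothetical usage: a 4-second window sampled at 50 Hz with a 1.5 Hz gait component.
t = np.arange(0, 4, 1 / 50.0)
walking_window = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
print(is_walking(walking_window))   # expected: True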

  4. Compact Dexterous Robotic Hand

    Science.gov (United States)

    Lovchik, Christopher Scott (Inventor); Diftler, Myron A. (Inventor)

    2001-01-01

    A compact robotic hand includes a palm housing, a wrist section, and a forearm section. The palm housing supports a plurality of fingers and one or more movable palm members that cooperate with the fingers to grasp and/or release an object. Each flexible finger comprises a plurality of hingedly connected segments, including a proximal segment pivotally connected to the palm housing. The proximal finger segment includes at least one groove defining first and second cam surfaces for engagement with a cable. A plurality of lead screw assemblies, each carried by the palm housing, are supplied with power from a flexible shaft rotated by an actuator and output linear motion to a cable to move a finger. The cable is secured within a respective groove and enables each finger to move between an opened and a closed position. A decoupling assembly pivotally connected to a proximal finger segment enables a cable connected thereto to control movement of an intermediate and distal finger segment independent of movement of the proximal finger segment. The dexterous robotic hand closely resembles the function of a human hand yet is lightweight and capable of grasping both heavy and light objects with a high degree of precision.

  5. Viewpoint Manifolds for Action Recognition

    Directory of Open Access Journals (Sweden)

    Souvenir Richard

    2009-01-01

    Full Text Available Abstract Action recognition from video is a problem that has many important applications to human motion analysis. In real-world settings, the viewpoint of the camera cannot always be fixed relative to the subject, so view-invariant action recognition methods are needed. Previous view-invariant methods use multiple cameras in both the training and testing phases of action recognition or require storing many examples of a single action from multiple viewpoints. In this paper, we present a framework for learning a compact representation of primitive actions (e.g., walk, punch, kick, sit) that can be used for video obtained from a single camera for simultaneous action recognition and viewpoint estimation. Using our method, which models the low-dimensional structure of these actions relative to viewpoint, we show recognition rates on a publicly available dataset previously only achieved using multiple simultaneous views.

  6. sEMG-Based Gesture Recognition with Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhen Ding

    2018-06-01

    Full Text Available The traditional classification methods for limb motion recognition based on sEMG have been deeply researched and have shown promising results. However, information loss during feature extraction reduces the recognition accuracy. To obtain higher accuracy, a deep learning method was introduced. In this paper, we propose a parallel multiple-scale convolution architecture. Compared with state-of-the-art methods, the proposed architecture fully considers the characteristics of the sEMG signal. Kernel filters larger than those commonly used in other CNN-based hand recognition methods are adopted. Meanwhile, a characteristic of the sEMG signal, namely muscle independence, is considered when designing the architecture. All the classification methods were evaluated on the NinaPro database. The results show that the proposed architecture has the highest recognition accuracy. Furthermore, the results indicate that a parallel multiple-scale convolution architecture with larger kernel filters that considers muscle independence can significantly increase the classification accuracy.
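
    A parallel multiple-scale convolution over sEMG windows, with grouped convolutions as a rough stand-in for muscle independence, can be sketched as follows. The channel count, kernel sizes, and layer sizes are assumptions for illustration; this is not the paper's network.

# Parallel convolution branches with different kernel sizes over a multi-channel sEMG window.
import torch
import torch.nn as nn

class MultiScaleSEMGNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=10, kernel_sizes=(11, 21, 41)):
        super().__init__()
        # groups=n_channels keeps each electrode channel separate in the first layer.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(n_channels, 4 * n_channels, k, padding=k // 2, groups=n_channels),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for k in kernel_sizes
        ])
        self.fc = nn.Linear(4 * n_channels * len(kernel_sizes), n_classes)

    def forward(self, x):                        # x: (batch, n_channels, window_length)
        feats = [branch(x).flatten(1) for branch in self.branches]
        return self.fc(torch.cat(feats, dim=1))

# Hypothetical usage: a batch of 16 windows, 8 electrodes, 200 samples each.
model = MultiScaleSEMGNet()
logits = model(torch.randn(16, 8, 200))
print(logits.shape)                               # torch.Size([16, 10])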

  7. Three-dimensional motion tracking correlates with skill level in upper gastrointestinal endoscopy

    DEFF Research Database (Denmark)

    Arnold, Sif H.; Svendsen, Morten Bo Søndergaard; Konge, Lars

    2015-01-01

    Background and study aim: Feedback is an essential part of training in upper gastrointestinal endoscopy. Virtual reality simulators provide limited feedback, focusing only on visual recognition with no feedback on the procedural part of training. Motion tracking identifies patterns of movement ..., and this study aimed to explore the correlation between skill level and operator movement using an objective automated tool. Methods: In this medical education study, 37 operators (12 senior doctors who performed endoscopic retrograde cholangiopancreatography, 13 doctors with varying levels of experience, and 12 untrained medical students) were tested using a virtual reality simulator. A motion sensor was used to collect data regarding the distance between the hands, and the height and movement of the scope hand. Test characteristics between groups were explored using Kruskal-Wallis H and Mann-Whitney U exact tests ...

  8. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, including spectral, image, and motion characteristics, and feature extraction is performed. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature-to-object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained

  9. Online handwritten mathematical expression recognition

    Science.gov (United States)

    Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül

    2007-01-01

    We describe a system for recognizing online, handwritten mathematical expressions. The system is designed with a user interface for writing scientific articles, supporting the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols, which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining the neural network output and the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text and mathematics and to draw figures in a single interface. After recognition, all output is combined into one LATEX code and compiled into a PDF file.

  10. Teaching Motion with the Global Positioning System

    Science.gov (United States)

    Budisa, Marko; Planinsic, Gorazd

    2003-01-01

    We have used the GPS receiver and a PC interface to track different types of motion. Various hands-on experiments that enlighten the physics of motion at the secondary school level are suggested (visualization of 2D and 3D motion, measuring car drag coefficient and fuel consumption). (Contains 8 figures.)

  11. DESIGN REVIEW OF CAD MODELS USING A NUI LEAP MOTION SENSOR

    Directory of Open Access Journals (Sweden)

    GÎRBACIA Florin

    2015-06-01

    Full Text Available Natural User Interfaces (NUI) is a relatively new area of research that aims to develop human-computer interfaces that are natural and intuitive, using voice commands, hand movements, and gesture recognition, similar to communication between people, which also involves body language and gestures. This paper presents a naturally designed workspace that acquires the user's motion using a Leap Motion sensor and visualizes CAD models using a CAVE-like 3D visualisation system. The user can modify complex CAD models using bimanual gesture commands in a 3D virtual environment. The developed bimanual gestures for rotate, pan, zoom and explode are presented. The conducted experiments established that the Leap Motion NUI sensor provides an intuitive tool for design review of CAD models, even for users with no experience in CAD systems and virtual environments.

  12. Mobile user identity sensing using the motion sensor

    Science.gov (United States)

    Zhao, Xi; Feng, Tao; Xu, Lei; Shi, Weidong

    2014-05-01

    Employing mobile sensor data to recognize user behavioral activities has been well studied in recent years. However, adopting such data as a biometric modality has rarely been explored. Existing methods either use the data to recognize gait, which is considered a distinctive identity feature, or segment a specific kind of motion for user recognition, such as the phone pick-up motion. Since identity and motion gesture jointly affect the motion data, fixing the gesture (walking or phone pick-up) simplifies the identity-sensing problem. However, it also introduces the complexity of gesture detection or the requirement of a higher sampling rate for the motion sensor readings, which may drain the battery quickly and affect the usability of the phone. In general, whether large-scale motion-based user authentication satisfies the accuracy requirements of a stand-alone biometric modality is still under investigation. In this paper, we propose a novel approach that uses motion sensor readings for user identity sensing. Instead of decoupling the user identity from a gesture, we reasonably assume that users have their own distinguishing phone usage habits and extract the identity from fuzzy activity patterns, represented by a combination of body movements, whose signal chains span a relatively low frequency spectrum, and hand movements, whose signals span a relatively high frequency spectrum. Bayesian rules are then applied to analyze the dependency of different frequency components in the signals. During testing, a posterior probability of user identity given the observed chains can be computed for authentication. Tested on an accelerometer dataset with 347 users, our approach has demonstrated promising results.
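
    The frequency-band decomposition and Bayesian analysis described above can be approximated with a simple sketch: low- and high-band powers per window scored by a Gaussian naive Bayes model. The band split, window length, and classifier are assumptions for illustration, not the paper's model.

# Band-power features from accelerometer windows, scored by a naive Bayes model.
import numpy as np
from scipy.signal import welch
from sklearn.naive_bayes import GaussianNB

def band_power_features(window, fs=50.0, split_hz=2.0):
    """Low-band (body movement) and high-band (hand movement) power of one axis window."""
    freqs, psd = welch(window, fs=fs, nperseg=min(128, len(window)))
    low = psd[freqs < split_hz].sum()
    high = psd[freqs >= split_hz].sum()
    return np.array([low, high])

# Hypothetical data: 300 windows of 256 samples from 3 users.
rng = np.random.default_rng(4)
windows = rng.normal(size=(300, 256))
users = rng.integers(0, 3, size=300)

X = np.vstack([band_power_features(w) for w in windows])
model = GaussianNB().fit(X, users)
print(model.predict_proba(X[:2]))   # posterior probability of each user identity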

  13. A systematic review of the etiopathogenesis of Kienböck's disease and a critical appraisal of its recognition as an occupational disease related to hand-arm vibration

    Directory of Open Access Journals (Sweden)

    Stahl Stéphane

    2012-11-01

    Full Text Available Abstract Background We systematically reviewed etiological factors of Kienböck's disease (osteonecrosis of the lunate) discussed in the literature in order to examine the justification for including Kienböck's disease (KD) in the European Listing of Occupational Diseases. Methods We searched Ovid/Medline and the Cochrane Library for articles discussing the etiology of osteonecrosis of the lunate published since the first description of KD in 1910 and up until July 2012 in English, French or German. Literature was classified by the level of evidence presented, the etiopathological hypothesis discussed, and the author's conclusion about the role of the etiopathological hypothesis. The causal relationship between KD and hand-arm vibration was elucidated by the Bradford Hill criteria. Results A total of 220 references was found. Of the included 152 articles, 140 (92%) reached the evidence level IV (case series). The four most frequently discussed factors were negative ulnar variance (n=72; 47%), primary arterial ischemia of the lunate (n=63; 41%), trauma (n=63; 41%) and hand-arm vibration (n=53; 35%). The quality of the cohort studies on hand-arm vibration did not permit a meta-analysis to evaluate the strength of an association to KD. Evidence for the lack of consistency, plausibility and coherence of the 4 most frequently discussed etiopathologies was found. No evidence was found to support any of the nine Bradford Hill criteria for a causal relationship between KD and hand-arm vibration. Conclusions A systematic review of 220 articles on the etiopathology of KD and the application of the Bradford Hill criteria does not provide sufficient scientific evidence to confirm or refute a causal relationship between KD and hand-arm vibration. This currently suggests that KD does not comply with the criteria of the International Labour Organization determining occupational diseases. However, research with a higher level of evidence is required to

  14. Joint range of motion of uninjured fingers after repairs to flexor tendon injuries of the hand

    Directory of Open Access Journals (Sweden)

    RB Rabelo

    2007-10-01

    Full Text Available OBJECTIVE: To assess the range of motion (ROM) in hands that underwent tendon repair of the flexor digitorum superficialis and flexor digitorum profundus muscles, comparing the data for each finger on the injured hand and between injured and uninjured hands. METHODS: Active goniometry was performed on 15 patients and 120 fingers: 60 fingers on injured hands and 60 on uninjured control hands. The subjects were evaluated at the time of removal of the plaster splint, with early mobilization performed using the modified Duran method. From the goniometric data, the Total Active Motion (TAM) index values of the fingers on the injured and control hands were recorded. For data analysis, the functional index formula proposed by the American Society for Surgery of the Hand (ASSH) was used, and the Mixed Effects Model was chosen for the statistical calculations. RESULTS: The ASSH formula for the injured fingers showed that 18.33% had a movement classification of "good", 18.33% "fair", and 63.34% "poor". The mean measurements in degrees of all fingers were compared with each other within each group (control or injured) and between the groups; a significant p-value was found only between the control and injured groups. There was no statistical difference between the TAM of each finger on the injured hand. CONCLUSION: Regardless of how many fingers suffered tendon injury in a hand, the uninjured fingers will also have their active ROM reduced in the period immediately after removal of the immobilization.

  15. Clean Hands Count

    Medline Plus


  16. SURVEY OF BIOMETRIC SYSTEMS USING IRIS RECOGNITION

    OpenAIRE

    S.PON SANGEETHA; DR.M.KARNAN

    2014-01-01

    Security plays an important role in any type of organization today. Iris recognition is one of the leading automatic biometric systems in the area of security, used to identify an individual person. Biometric systems include fingerprints, facial features, voice recognition, hand geometry, handwriting, the eye retina, and, the most secure one, presented in this paper: iris recognition. Biometric systems have become very popular in security systems because it is not possi...

  17. MOCA: A Low-Power, Low-Cost Motion Capture System Based on Integrated Accelerometers

    Directory of Open Access Journals (Sweden)

    Elisabetta Farella

    2007-01-01

    Full Text Available Human-computer interaction (HCI) and virtual reality applications pose the challenge of enabling real-time interfaces for natural interaction. Gesture recognition based on body-mounted accelerometers has been proposed as a viable solution to translate patterns of movements that are associated with user commands, thus substituting point-and-click methods or other cumbersome input devices. On the other hand, cost and power constraints make the implementation of a natural and efficient interface suitable for consumer applications a critical task. Even though several gesture recognition solutions exist, their use in the HCI context has been poorly characterized. For this reason, in this paper, we consider a low-cost/low-power wearable motion tracking system based on integrated accelerometers called motion capture with accelerometers (MOCA), which we evaluated for navigation in virtual spaces. Recognition is based on a geometric algorithm that enables efficient and robust detection of rotational movements. Our objective is to demonstrate that such a low-cost and low-power implementation is suitable for HCI applications. To this purpose, we characterized the system from both a quantitative and a qualitative point of view. First, we performed static and dynamic assessment of movement recognition accuracy. Second, we evaluated the effectiveness of user experience using a 3D game application as a test bed.

  18. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training.

    Science.gov (United States)

    Kutafina, Ekaterina; Laukamp, David; Bettermann, Ralf; Schroeder, Ulrik; Jonas, Stephan M

    2016-08-03

    In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user's hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills.
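
    The hidden-Markov-model part of the recognition pipeline mentioned above can be sketched generically: one Gaussian HMM per gesture class, with a new sequence assigned to the highest-likelihood model. The gesture names, feature dimensionality, and number of hidden states below are hypothetical, not the study's implementation.

# Per-class Gaussian HMMs over forearm-sensor feature sequences.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_class, n_states=4):
    """sequences_by_class: dict mapping class label -> list of (T, n_features) arrays."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        models[label] = GaussianHMM(n_components=n_states, n_iter=50).fit(X, lengths)
    return models

def classify(models, sequence):
    # Pick the class whose HMM assigns the highest log-likelihood to the sequence.
    return max(models, key=lambda label: models[label].score(sequence))

# Hypothetical usage with synthetic 6-dimensional feature sequences for two gestures.
rng = np.random.default_rng(5)
data = {
    "rub_palms": [rng.normal(0, 1, size=(30, 6)) for _ in range(10)],
    "interlace_fingers": [rng.normal(1, 1, size=(30, 6)) for _ in range(10)],
}
models = train_models(data)
print(classify(models, rng.normal(1, 1, size=(30, 6))))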

  19. Personal authentication through dorsal hand vein patterns

    Science.gov (United States)

    Hsu, Chih-Bin; Hao, Shu-Sheng; Lee, Jen-Chun

    2011-08-01

    Biometric identification is an emerging technology that can solve security problems in our networked society. A reliable and robust personal verification approach using dorsal hand vein patterns is proposed in this paper. The approach has low computational and memory requirements and high recognition accuracy. In our work, a near-infrared charge-coupled device (CCD) camera is adopted as the input device for capturing dorsal hand vein images; it has the advantages of low-cost and noncontact imaging. In the proposed approach, two finger-peaks are automatically selected as the datum points to define the region of interest (ROI) in the dorsal hand vein images. A modified two-directional two-dimensional principal component analysis, which performs an alternate two-dimensional PCA (2DPCA) in the column direction of images in the 2DPCA subspace, is proposed to exploit the correlation of vein features inside the ROI between images. The major advantage of the proposed method is that it requires fewer coefficients for efficient dorsal hand vein image representation and recognition. The experimental results on our large dorsal hand vein database show that the presented scheme achieves promising performance (false reject rate: 0.97% and false acceptance rate: 0.05%) and is feasible for dorsal hand vein recognition.
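
    Two-directional 2DPCA as described above (projection matrices computed in both the row and column directions of the ROI images) can be sketched as follows; the image size, numbers of retained eigenvectors, and Frobenius-distance matching rule are assumptions for illustration, not the paper's exact scheme.

# Two-directional 2DPCA features for ROI images, matched by Frobenius distance.
import numpy as np

def two_directional_2dpca(images, d_cols=8, d_rows=8):
    """images: (N, H, W) array of ROI images. Returns projection matrices (Z, X) and the mean image."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Column-direction covariance (classic 2DPCA) and row-direction covariance.
    G_col = np.mean([a.T @ a for a in centered], axis=0)   # (W, W)
    G_row = np.mean([a @ a.T for a in centered], axis=0)   # (H, H)
    X = np.linalg.eigh(G_col)[1][:, -d_cols:]              # top column eigenvectors
    Z = np.linalg.eigh(G_row)[1][:, -d_rows:]              # top row eigenvectors
    return Z, X, mean

def project(image, Z, X, mean):
    return Z.T @ (image - mean) @ X                         # compact (d_rows, d_cols) feature

# Hypothetical usage: 50 gallery ROI images of size 64x64.
rng = np.random.default_rng(6)
gallery = rng.normal(size=(50, 64, 64))
Z, X, mean = two_directional_2dpca(gallery)
feats = np.stack([project(img, Z, X, mean) for img in gallery])
probe = project(gallery[0] + 0.01 * rng.normal(size=(64, 64)), Z, X, mean)
match = np.argmin(np.linalg.norm(feats - probe, axis=(1, 2)))
print("best match index:", match)                           # expected: 0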

  20. Speaker Recognition

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti; Jørgensen, Kasper Winther

    2005-01-01

    Speaker recognition is basically divided into speaker identification and speaker verification. Verification is the task of automatically determining if a person really is the person he or she claims to be. This technology can be used as a biometric feature for verifying the identity of a person...

  1. Handwriting Moroccan regions recognition using Tifinagh character

    Directory of Open Access Journals (Sweden)

    B. El Kessab

    2015-09-01

    In this context we propose a dataset of handwritten Tifinagh region names composed of 1600 images (100 images for each region). The dataset can be used, on the one hand, to test the efficiency of the Tifinagh region recognition system in extracting significant characteristics and, on the other hand, to verify the correct identification of each region in the classification phase.

  2. Recognition of Indian Sign Language in Live Video

    Science.gov (United States)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign Language Recognition has emerged as one of the important areas of research in Computer Vision. The difficulty faced by researchers is that the instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, where continuous video sequences of the signs have been considered. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were considered for the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. It deals with bare hands, thus allowing the user to interact with the system in a natural way. We have considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
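
    The classification step described above (an eigenvalue-weighted Euclidean distance between feature vectors) can be sketched generically; the PCA feature space, per-class mean templates, and dimensions below are assumptions for illustration, not the paper's implementation.

# Eigenvalue-weighted Euclidean distance to per-class templates in a PCA feature space.
import numpy as np
from sklearn.decomposition import PCA

def weighted_distance(a, b, eigenvalues):
    """Euclidean distance with each component weighted by its eigenvalue."""
    return np.sqrt(np.sum(eigenvalues * (a - b) ** 2))

# Hypothetical data: 240 training feature vectors (10 per sign, 24 signs), 50-dim.
rng = np.random.default_rng(7)
X_train = rng.normal(size=(240, 50))
y_train = np.repeat(np.arange(24), 10)

pca = PCA(n_components=10).fit(X_train)
Z_train = pca.transform(X_train)
templates = np.stack([Z_train[y_train == c].mean(axis=0) for c in range(24)])

def classify(feature_vector):
    z = pca.transform(feature_vector.reshape(1, -1))[0]
    dists = [weighted_distance(z, t, pca.explained_variance_) for t in templates]
    return int(np.argmin(dists))

print(classify(X_train[0]))   # predicted sign index for one training sample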

  3. Hand Surgery: Anesthesia

    Science.gov (United States)


  4. Biomechanical Reconstruction Using the Tacit Learning System: Intuitive Control of Prosthetic Hand Rotation.

    Science.gov (United States)

    Oyama, Shintaro; Shimoda, Shingo; Alnajjar, Fady S K; Iwatsuki, Katsuyuki; Hoshiyama, Minoru; Tanaka, Hirotaka; Hirata, Hitoshi

    2016-01-01

    Background: For mechanically reconstructing human biomechanical function, intuitive proportional control, and robustness to unexpected situations are required. Particularly, creating a functional hand prosthesis is a typical challenge in the reconstruction of lost biomechanical function. Nevertheless, currently available control algorithms are in the development phase. The most advanced algorithms for controlling multifunctional prosthesis are machine learning and pattern recognition of myoelectric signals. Despite the increase in computational speed, these methods cannot avoid the requirement of user consciousness and classified separation errors. "Tacit Learning System" is a simple but novel adaptive control strategy that can self-adapt its posture to environment changes. We introduced the strategy in the prosthesis rotation control to achieve compensatory reduction, as well as evaluated the system and its effects on the user. Methods: We conducted a non-randomized study involving eight prosthesis users to perform a bar relocation task with/without Tacit Learning System support. Hand piece and body motions were recorded continuously with goniometers, videos, and a motion-capture system. Findings: Reduction in the participants' upper extremity rotatory compensation motion was monitored during the relocation task in all participants. The estimated profile of total body energy consumption improved in five out of six participants. Interpretation: Our system rapidly accomplished nearly natural motion without unexpected errors. The Tacit Learning System not only adapts human motions but also enhances the human ability to adapt to the system quickly, while the system amplifies compensation generated by the residual limb. The concept can be extended to various situations for reconstructing lost functions that can be compensated.

  5. Non-linear methods for the quantification of cyclic motion

    OpenAIRE

    Quintana Duque, Juan Carlos

    2016-01-01

    Traditional methods of human motion analysis assume that fluctuations in cycles (e.g. gait motion) and repetitions (e.g. tennis shots) arise solely from noise. However, the fluctuations may have enough information to describe the properties of motion. Recently, the fluctuations in motion have been analysed based on the concepts of variability and stability, but they are not used uniformly. On the one hand, these concepts are often mixed in the existing literature, while on the other hand, the...

  6. Controller design for Robotic hand through Electroencephalogram

    OpenAIRE

    Pandelidis P.; Kiriazis N.; Orgianelis K.; Koulios N.

    2016-01-01

    This paper deals with the design, construction, and control of a robotic hand via an electroencephalogram sensor. First, a robotic device that is able to mimic a real human hand is constructed. A PID controller is designed in order to improve the performance of the robotic arm for grabbing objects. Furthermore, a novel design approach is presented for controlling the motion of the robotic arm using signals produced from an innovative electroencephalogram sensor that detects the con...

  7. Force and motion

    CERN Document Server

    Robertson, William C

    2002-01-01

    Intimidated by inertia? Frightened by forces? Mystified by Newton's law of motion? You're not alone, and help is at hand. The Stop Faking It! series is perfect for science teachers, home-schoolers, and parents wanting to help with homework: all of you who need a jargon-free way to learn the background for teaching middle school physical science with confidence. With Bill Robertson as your friendly, able but somewhat irreverent guide, you will discover you CAN come to grips with the basics of force and motion. Combining easy-to-understand explanations with activities using commonly found equipment, this book will lead you through Newton's laws to the physics of space travel. The book is as entertaining as it is informative. Best of all, the author understands the needs of adults who want concrete examples, hands-on activities, clear language, diagrams and, yes, a certain amount of empathy. Ideas For Use Newton's laws, and all of the other motion principles presented in this book, do a good job of helping us to underst...

  8. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice, and remote controls alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge.' This monograph covers state-of-the-art hand gesture recognition approaches and how they have evolved since their inception. The author also details his research in this area over the past 8 years and how the future of HCI might turn out. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  9. Sensing Movement: Microsensors for Body Motion Measurement

    Directory of Open Access Journals (Sweden)

    Hansong Zeng

    2011-01-01

    Full Text Available Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.

  10. Osteoarthritis of the Hand

    Science.gov (United States)


  11. Hands in Systemic Disease

    Science.gov (United States)

    ... hands, being composed of many types of tissue, including blood vessels, nerves, skin and skin-related tissues, bones, and muscles/tendons/ligaments, may show changes that reflect a ...

  12. Denmark: HAND in HAND Policy Questionnaire

    DEFF Research Database (Denmark)

    Laursen, Hilmar Dyrborg; Nielsen, Birgitte Lund

    2018-01-01

    Som del af det internationale EU finansierede projekt Hand in Hand, der fokuserer på de såkaldte SEI-kompetencer (Social, Emotional, Intercultural), er dansk policy i relation til elevernes sociale, emotionelle og interkulturelle læring kortlagt i denne rapport. Der refereres bl.a. til "elevernes...

  13. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models.

    Directory of Open Access Journals (Sweden)

    Hirofumi Ida

    Full Text Available The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The result showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased the discrimination accuracy as compared with the stick-figure model. This suggests that the complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model regardless of the skill level. This result suggests that the simplified visual information can bias the perception of the motion speed toward slower. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, the observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.

  14. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models.

    Science.gov (United States)

    Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu

    2012-01-01

    The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The result showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased the discrimination accuracy as compared with the stick-figure model. This suggests that the complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model regardless of the skill level. This result suggests that the simplified visual information can bias the perception of the motion speed toward slower. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, the observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.

  15. Operator Fractional Brownian Motion and Martingale Differences

    Directory of Open Access Journals (Sweden)

    Hongshuai Dai

    2014-01-01

    Full Text Available It is well known that martingale difference sequences are very useful in applications and theory. On the other hand, the operator fractional Brownian motion as an extension of the well-known fractional Brownian motion also plays an important role in both applications and theory. In this paper, we study the relation between them. We construct an approximation sequence of operator fractional Brownian motion based on a martingale difference sequence.

  16. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    Science.gov (United States)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  17. Perception of biological motion in visual agnosia

    Directory of Open Access Journals (Sweden)

    Elisabeth Huberle

    2012-08-01

    Full Text Available Over the past twenty-five years, visual processing has been discussed in the context of the dual stream hypothesis, consisting of a ventral ('what') and a dorsal ('where') visual information processing pathway. Patients with brain damage of the ventral pathway typically present with signs of visual agnosia, the inability to identify and discriminate objects by visual exploration, but show normal motion perception. A dissociation between the perception of biological motion and non-biological motion has been suggested: perception of biological motion might be impaired when 'non-biological' motion perception is intact and vice versa. The impact of object recognition on the perception of biological motion remains unclear. We thus investigated this question in a patient with severe visual agnosia, who showed normal perception of non-biological motion. The data suggested that the patient's perception of biological motion remained largely intact. However, when tested with objects constructed of coherently moving dots ('Shape-from-Motion'), recognition was severely impaired. The results are discussed in the context of possible mechanisms of biological motion perception.

  18. Development of five-finger robotic hand using master-slave control for hand-assisted laparoscopic surgery.

    Science.gov (United States)

    Yoshida, Koki; Yamada, Hiroshi; Kato, Ryu; Seki, Tatsuya; Yokoi, Hiroshi; Mukai, Masaya

    2016-08-01

    This study aims to develop a robotic hand as a substitute for a surgeon's hand in hand-assisted laparoscopic surgery (HALS). We determined the requirements for the proposed hand from a surgeon's motions in HALS. We identified four basic behaviors: "power grasp," "precision grasp," "open hand for exclusion," and "peace sign for extending peritoneum." The proposed hand had the minimum necessary DOFs for performing these behaviors, five fingers as in a human's hand, a palm that can be folded when a surgeon inserts the hand into the abdomen, and an arm for adjusting the hand's position. We evaluated the proposed hand based on a performance test and a physician's opinions, and we confirmed that it can grasp organs.

  19. Robotic hand project

    OpenAIRE

    Karaçizmeli, Cengiz; Çakır, Gökçe; Tükel, Dilek

    2014-01-01

    In this work, a mechatronics-based robotic hand is controlled by position data taken from a glove with mounted flex sensors that capture finger bending of the human hand. The angular movements of the human hand's fingers are sensed and processed by a microcontroller, and the robotic hand is controlled by actuating servo motors. During the tests performed, it was seen that the robotic hand can reproduce the movement of the human hand wearing the glove. This robotic hand can be used not only...

  20. Modulation of pathogen recognition by autophagy

    Directory of Open Access Journals (Sweden)

    Ji Eun Oh

    2012-03-01

    Full Text Available Autophagy is an ancient biological process for maintaining cellular homeostasis by degradation of long-lived cytosolic proteins and organelles. Recent studies demonstrated that autophagy is availed by immune cells to regulate innate immunity. On the one hand, cells exert direct effector function by degrading intracellular pathogens; on the other hand, autophagy modulates pathogen recognition and downstream signaling for innate immune responses. Pathogen recognition via pattern recognition receptors induces autophagy. The function of phagocytic cells is enhanced by recruitment of autophagy-related proteins. Moreover, autophagy acts as a delivery system for viral replication complexes to migrate to the endosomal compartments where virus sensing occurs. In another case, key molecules of the autophagic pathway have been found to negatively regulate immune signaling, thus preventing aberrant activation of cytokine production and consequent immune responses. In this review, we focus on the recent advances in the role of autophagy in pathogen recognition and modulation of innate immune responses.

  1. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (surface electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural-based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean square (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of an EMG pattern recognition system. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for EMG signal
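
    The time-domain (TD) feature set and PCA reduction mentioned in the Methods can be sketched with commonly used sEMG features; the specific features, window size, channel count, and number of retained components are assumptions for illustration, not necessarily those chosen in the paper.

# Common time-domain sEMG features per window, reduced with PCA before classification.
import numpy as np
from sklearn.decomposition import PCA

def time_domain_features(window, zc_threshold=0.01):
    """Mean absolute value, waveform length, zero crossings, and slope sign changes."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(window[:-1] - window[1:]) > zc_threshold))
    diffs = np.diff(window)
    ssc = np.sum(diffs[:-1] * diffs[1:] < 0)
    return np.array([mav, wl, zc, ssc])

# Hypothetical data: 120 windows, 6 channels, 200 samples per channel.
rng = np.random.default_rng(8)
windows = rng.normal(size=(120, 6, 200))
features = np.stack([np.concatenate([time_domain_features(ch) for ch in w]) for w in windows])

reduced = PCA(n_components=8).fit_transform(features)   # 24-dim -> 8-dim
print(reduced.shape)                                     # (120, 8)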

  2. Clean Hands Count

    Medline Plus

    Full Text Available ... to promote or encourage adherence to CDC hand hygiene recommendations. It is a component of the Clean ... aims to address myths and misperceptions about hand hygiene and empower patients to play a role in ...

  3. Clean Hands Count

    Medline Plus

    Full Text Available ... intended to promote or encourage adherence to CDC hand hygiene recommendations. It is a component of the Clean ... also aims to address myths and misperceptions about hand hygiene and empower patients to play a role in ...

  4. Clean Hands Count

    Science.gov (United States)

    ... intended to promote or encourage adherence to CDC hand hygiene recommendations. It is a component of the Clean ... also aims to address myths and misperceptions about hand hygiene and empower patients to play a role in ...

  5. Wash Your Hands

    Science.gov (United States)

    ... hand sanitizers might not remove harmful chemicals like pesticides and heavy metals from hands. Be cautious when ...

  6. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    Science.gov (United States)

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. In the context of other known static visual information regarding the medaka, the potential multiplicity of information regarding conspecific recognition has further accumulated. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.

  7. New frontiers in the rubber hand experiment: when a robotic hand becomes one's own.

    Science.gov (United States)

    Caspar, Emilie A; De Beir, Albert; Magalhaes De Saldanha Da Gama, Pedro A; Yernaux, Florence; Cleeremans, Axel; Vanderborght, Bram

    2015-09-01

    The rubber hand illusion is an experimental paradigm in which participants consider a fake hand to be part of their body. This paradigm has been used in many domains of psychology (i.e., research on pain, body ownership, agency) and is of clinical importance. The classic rubber hand paradigm nevertheless suffers from limitations, such as the absence of active motion or the reliance on approximate measurements, which makes strict experimental conditions difficult to obtain. Here, we report on the development of a novel technology-a robotic, user- and computer-controllable hand-that addresses many of the limitations associated with the classic rubber hand paradigm. Because participants can actively control the robotic hand, the device affords higher realism and authenticity. Our robotic hand has a comparatively low cost and opens up novel and innovative methods. In order to validate the robotic hand, we have carried out three experiments. The first two studies were based on previous research using the rubber hand, while the third was specific to the robotic hand. We measured both sense of agency and ownership. Overall, results show that participants experienced a "robotic hand illusion" in the baseline conditions. Furthermore, we also replicated previous results about agency and ownership.

  8. Hand hygiene strategies

    OpenAIRE

    Yazaji, Eskandar Alex

    2011-01-01

    Hand hygiene is one of the major players in preventing healthcare associated infections. However, healthcare workers' compliance with hand hygiene continues to be a challenge. This article will address strategies to help improve hand hygiene compliance. Keywords: hand hygiene; healthcare associated infections; multidisciplinary program; system change; accountability; education; feedback. (Published: 18 July 2011) Citation: Journal of Community Hospital Internal Medicine Perspectives 2011, 1: 72...

  9. About Hand Surgery

    Science.gov (United States)

    ... Find a hand surgeon near you. © 2009 American Society for Surgery of the Hand. Definition developed by ASSH Council. ...

  10. Guideline Implementation: Hand Hygiene.

    Science.gov (United States)

    Goldberg, Judith L

    2017-02-01

    Performing proper hand hygiene and surgical hand antisepsis is essential to reducing the rates of health care-associated infections, including surgical site infections. The updated AORN "Guideline for hand hygiene" provides guidance on hand hygiene and surgical hand antisepsis, the wearing of fingernail polish and artificial nails, proper skin care to prevent dermatitis, the wearing of jewelry, hand hygiene product selection, and quality assurance and performance improvement considerations. This article focuses on key points of the guideline to help perioperative personnel make informed decisions about hand hygiene and surgical hand antisepsis. The key points address the necessity of keeping fingernails and skin healthy, not wearing jewelry on the hands or wrists in the perioperative area, properly performing hand hygiene and surgical hand antisepsis, and involving patients and visitors in hand hygiene initiatives. Perioperative RNs should review the complete guideline for additional information and for guidance when writing and updating policies and procedures. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  11. Robotic hand and fingers

    Science.gov (United States)

    Salisbury, Curt Michael; Dullea, Kevin J.

    2017-06-06

    Technologies pertaining to a robotic hand are described herein. The robotic hand includes one or more fingers releasably attached to a robotic hand frame. The fingers can abduct and adduct as well as flex and tense. The fingers are releasably attached to the frame by magnets that allow for the fingers to detach from the frame when excess force is applied to the fingers.

  12. Kinesthetic information disambiguates visual motion signals.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Rubber hand illusion affects joint angle perception.

    Directory of Open Access Journals (Sweden)

    Martin V Butz

    Full Text Available The Rubber Hand Illusion (RHI is a well-established experimental paradigm. It has been shown that the RHI can affect hand location estimates, arm and hand motion towards goals, the subjective visual appearance of the own hand, and the feeling of body ownership. Several studies also indicate that the peri-hand space is partially remapped around the rubber hand. Nonetheless, the question remains if and to what extent the RHI can affect the perception of other body parts. In this study we ask if the RHI can alter the perception of the elbow joint. Participants had to adjust an angular representation on a screen according to their proprioceptive perception of their own elbow joint angle. The results show that the RHI does indeed alter the elbow joint estimation, increasing the agreement with the position and orientation of the artificial hand. Thus, the results show that the brain does not only adjust the perception of the hand in body-relative space, but it also modifies the perception of other body parts. In conclusion, we propose that the brain continuously strives to maintain a consistent internal body image and that this image can be influenced by the available sensory information sources, which are mediated and mapped onto each other by means of a postural, kinematic body model.

  14. Enrollment Time as a Requirement for Biometric Hand Recognition Systems

    OpenAIRE

    Carvalho, João; Sá, Vítor; Tenreiro de Magalhães, Sérgio; Santos, Henrique

    2015-01-01

    Biometric systems are increasingly being used as a means for authentication to provide system security in modern technologies. The performance of a biometric system depends on the accuracy, the processing speed, the template size, and the time necessary for enrollment. While much research has focused on the first three factors, enrollment time has not received as much attention. In this work, we present the findings of our research focused upon studying user’s behavior when enrolling in...

  15. Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Aleš Procházka

    2018-05-01

    Full Text Available Multimodal signal analysis based on sophisticated sensors, efficient communication systems and fast parallel processing methods has a rapidly increasing range of multidisciplinary applications. The present paper is devoted to pattern recognition, machine learning, and the analysis of sleep stages in the detection of sleep disorders using polysomnography (PSG) data, including electroencephalography (EEG), breathing (Flow), and electro-oculogram (EOG) signals. The proposed method is based on the classification of selected features by a neural network system with sigmoidal and softmax transfer functions using Bayesian methods for the evaluation of the probabilities of the separate classes. The application is devoted to the analysis of the sleep stages of 184 individuals with different diagnoses, using EEG and further PSG signals. Data analysis points to an average increase of the length of the Wake stage by 2.7% per 10 years and a decrease of the length of the Rapid Eye Movement (REM) stages by 0.8% per 10 years. The mean classification accuracy for given sets of records and single EEG and multimodal features is 88.7% (standard deviation, STD: 2.1) and 89.6% (STD: 1.9), respectively. The proposed methods enable the use of adaptive learning processes for the detection and classification of health disorders based on prior specialist experience and man–machine interaction.
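
    The record above describes a fairly standard pipeline: per-epoch features classified by a network with sigmoidal hidden units and a probabilistic (softmax) output layer. The sketch below illustrates that kind of classifier using scikit-learn's MLPClassifier as a stand-in; the feature dimensions, labels and hyperparameters are invented and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a small neural classifier with a
# sigmoidal hidden layer and probabilistic (softmax) outputs for sleep stages.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per 30-s epoch, columns = EEG/Flow/EOG features.
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 4, size=1000)          # e.g. Wake, Light, Deep, REM (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",  # sigmoidal units
                    max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)            # per-class probabilities (softmax output layer)
print("accuracy:", clf.score(X_te, y_te))
```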

  16. Mechanical design and control of a new myoelectric hand prosthesis

    NARCIS (Netherlands)

    Peerdeman, B.; Stramigioli, Stefano; Hekman, Edsko E.G.; Brouwer, Dannis Michel; Misra, Sarthak

    2011-01-01

    The development of modern, myoelectrically controlled hand prostheses can be difficult, due to the many requirements its mechanical design and control system need to fulfill [1]. The hand should be controllable with few input signals, while being able to perform a wide range of motions. It should be

  17. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Shaohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant; the marginal posterior of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video recognition.

  18. An intention driven hand functions task training robotic system.

    Science.gov (United States)

    Tong, K Y; Ho, S K; Pang, P K; Hu, X L; Tam, W K; Fung, K L; Wei, X J; Chen, P N; Chen, M

    2010-01-01

    A novel design of a hand functions task training robotic system was developed for stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable of driving 2 degrees of freedom (DOFs) of each finger at the same time. Powered by a linear actuator, the finger assembly achieves a 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and a 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit different finger lengths. With this task training system, stroke subjects can open and close their impaired hand using their own intention to carry out some daily living tasks.

  19. [Study on an Exoskeleton Hand Function Training Device].

    Science.gov (United States)

    Hu, Xin; Zhang, Ying; Li, Jicai; Yi, Jinhua; Yu, Hongliu; He, Rongrong

    2016-02-01

    Based on the structure and motion bionic principles of normal adult fingers, the biological characteristics of human hands were analyzed, and a wearable exoskeleton hand function training device was designed for the rehabilitation of stroke patients or patients with hand trauma. This device includes the exoskeleton mechanical structure and an electromyography (EMG) control system. With an adjustable mechanism, the device can fit different finger lengths, and the motion state of the exoskeleton hand is controlled by capturing the EMG of the user's contralateral limb. Driven by the device, the user's fingers then carry out adduction/abduction rehabilitation training. Finally, the mechanical properties and training effect of the exoskeleton hand were verified through mechanism simulation and experiments on a prototype of the wearable exoskeleton hand function training device.

  20. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable and 3D gestural interaction might be the most effective alternative for the current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and database and retrieving the best match. Once the best match is found from the database in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
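
    As a rough illustration of retrieval-based gesture recognition of the kind described above, the sketch below matches a query frame against a database of annotated frames by comparing coarse gradient-orientation histograms. This is a simplified stand-in for the paper's hierarchical edge-orientation scoring; the helper names and the toy database are invented.

```python
# Simplified illustration of retrieval by edge-orientation features (not the paper's
# hierarchical scoring): build a coarse gradient-orientation histogram per image and
# return the database entry with the smallest histogram distance.
import numpy as np

def edge_orientation_histogram(gray, bins=18):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)                # normalise

def best_match(query_gray, database):
    """database: list of (gray_image, hand_pose_params); returns pose of best match."""
    q = edge_orientation_histogram(query_gray)
    scores = [np.linalg.norm(q - edge_orientation_histogram(img)) for img, _ in database]
    return database[int(np.argmin(scores))][1]

# Toy usage with random images standing in for annotated gesture frames.
rng = np.random.default_rng(1)
db = [(rng.random((64, 64)), {"pose_id": i}) for i in range(100)]
print(best_match(rng.random((64, 64)), db))
```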

  1. Hand hygiene in the intensive care unit.

    Science.gov (United States)

    Tschudin-Sutter, Sarah; Pargger, Hans; Widmer, Andreas F

    2010-08-01

    Healthcare-associated infections affect 1.4 million patients at any time worldwide, as estimated by the World Health Organization. In intensive care units, the burden of healthcare-associated infections is greatly increased, causing additional morbidity and mortality. Multidrug-resistant pathogens are commonly involved in such infections and render effective treatment challenging. Proper hand hygiene is the single most important, simplest, and least expensive means of preventing healthcare-associated infections. In addition, it is equally important to stop transmission of multidrug-resistant pathogens. According to the Centers for Disease Control and Prevention and World Health Organization guidelines on hand hygiene in health care, alcohol-based handrub should be used as the preferred means for routine hand antisepsis. Alcohols have excellent in vitro activity against Gram-positive and Gram-negative bacteria, including multidrug-resistant pathogens, such as methicillin-resistant Staphylococcus aureus and vancomycin-resistant enterococci, Mycobacterium tuberculosis, a variety of fungi, and most viruses. Some pathogens, however, such as Clostridium difficile, Bacillus anthracis, and noroviruses, may require special hand hygiene measures. Failure to provide user friendliness of hand hygiene equipment and shortage of staff are predictors for noncompliance, especially in the intensive care unit setting. Therefore, practical approaches to promote hand hygiene in the intensive care unit include provision of a minimal number of handrub dispensers per bed, monitoring of compliance, and choice of the most attractive product. Lack of knowledge of guidelines for hand hygiene, lack of recognition of hand hygiene opportunities during patient care, and lack of awareness of the risk of cross-transmission of pathogens are barriers to good hand hygiene practices. Multidisciplinary programs to promote increased use of alcoholic handrub lead to an increased compliance of healthcare

  2. The Avocado Hand

    LENUS (Irish Health Repository)

    Rahmani, G

    2017-11-01

    Accidental self-inflicted knife injuries to digits are a common cause of tendon and nerve injury requiring hand surgery. There has been an apparent increase in avocado related hand injuries. Classically, the patients hold the avocado in their non-dominant hand while using a knife to cut/peel the fruit with their dominant hand. The mechanism of injury is usually a stabbing injury to the non-dominant hand as the knife slips past the stone, through the soft avocado fruit. Despite their apparent increased incidence, we could not find any cases in the literature which describe the “avocado hand”. We present a case of a 32-year-old woman who sustained a significant hand injury while preparing an avocado. She required exploration and repair of a digital nerve under regional anaesthesia and has since made a full recovery.

  3. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... A classification system for hand eczema is proposed. Conclusions It is suggested that this classification be used in clinical work and in clinical trials....

  4. Controller design for Robotic hand through Electroencephalogram

    Directory of Open Access Journals (Sweden)

    Pandelidis P.

    2016-01-01

    Full Text Available This paper deals with the design, construction and control of a robotic hand via an electroencephalogram sensor. First, a robotic device able to mimic a real human hand is constructed. A PID controller is designed to improve the performance of the robotic arm for grabbing objects. Furthermore, a novel design approach is presented for controlling the motion of the robotic arm using signals produced by an innovative electroencephalogram sensor that detects the concentration of the brain
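
    The abstract mentions a PID controller for the grasping motion. The following is a minimal discrete PID sketch with a toy first-order joint model; the gains, time step and plant model are assumptions made for illustration only, not values from the paper.

```python
# Minimal discrete PID sketch (gains and the first-order "finger" model are made up;
# the paper's actual controller and hardware interface are not specified here).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: drive a finger-joint angle toward 45 degrees.
dt = 0.01
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
angle = 0.0
for _ in range(500):
    command = pid.update(45.0, angle)
    angle += command * dt          # crude first-order response of the joint
print(round(angle, 2))
```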

  5. Improved motion description for action classification

    Directory of Open Access Journals (Sweden)

    Mihir eJain

    2016-01-01

    Full Text Available Even though the importance of explicitly integrating motion characteristics in video descriptions has been demonstrated by several recent papers on action classification, our current work concludes that adequately decomposing visual motion into dominant and residual motions, i.e., camera and scene motion, significantly improves action recognition algorithms. This holds true both for the extraction of the space-time trajectories and for computation of descriptors. We designed a new motion descriptor – the DCS descriptor – that captures additional information on local motion patterns, enhancing results based on differential motion scalar quantities: divergence, curl and shear features. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. These findings are complementary to each other and they outperformed all previously reported results by a significant margin on three challenging datasets: Hollywood 2, HMDB51 and Olympic Sports, as reported in Jain et al. (2013). These results were further improved by Oneata et al. (2013), Wang and Schmid (2013) and Zhu et al. (2013) through the use of the Fisher vector encoding. We therefore also employ the Fisher vector in this paper and we further enhance our approach by combining trajectories from both optical flow and compensated flow. We also provide additional details of DCS descriptors, including visualization. For extending the evaluation, a novel dataset with 101 action classes, UCF101, was added.
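
    The differential motion quantities named above (divergence, curl and shear) can be computed directly from a dense optical-flow field. The sketch below uses the standard first-derivative definitions on a toy flow field; it is not the authors' implementation, and the array shapes are arbitrary.

```python
# Sketch of the kinematic scalar fields used by DCS-style descriptors: divergence,
# curl, and shear computed from a dense optical-flow field (u, v). Standard
# first-derivative definitions; not the authors' implementation.
import numpy as np

def kinematic_features(u, v):
    """u, v: 2-D arrays of horizontal/vertical flow. Returns div, curl, shear maps."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    div = du_dx + dv_dy                                  # expansion / contraction
    curl = dv_dx - du_dy                                 # local rotation
    shear = np.hypot(du_dx - dv_dy, du_dy + dv_dx)       # magnitude of the shear terms
    return div, curl, shear

# Toy flow field standing in for the residual (camera-compensated) motion.
rng = np.random.default_rng(0)
u, v = rng.normal(size=(2, 120, 160))
div, curl, shear = kinematic_features(u, v)
print(div.shape, curl.shape, shear.shape)
```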

  6. Coordination of hand shape.

    Science.gov (United States)

    Pesyna, Colin; Pundi, Krishna; Flanders, Martha

    2011-03-09

    The neural control of hand movement involves coordination of the sensory, motor, and memory systems. Recent studies have documented the motor coordinates for hand shape, but less is known about the corresponding patterns of somatosensory activity. To initiate this line of investigation, the present study characterized the sense of hand shape by evaluating the influence of differences in the amount of grasping or twisting force, and differences in forearm orientation. Human subjects were asked to use the left hand to report the perceived shape of the right hand. In the first experiment, six commonly grasped items were arranged on the table in front of the subject: bottle, doorknob, egg, notebook, carton, and pan. With eyes closed, subjects used the right hand to lightly touch, forcefully support, or imagine holding each object, while 15 joint angles were measured in each hand with a pair of wired gloves. The forces introduced by supporting or twisting did not influence the perceptual report of hand shape, but for most objects, the report was distorted in a consistent manner by differences in forearm orientation. Subjects appeared to adjust the intrinsic joint angles of the left hand, as well as the left wrist posture, so as to maintain the imagined object in its proper spatial orientation. In a second experiment, this result was largely replicated with unfamiliar objects. Thus, somatosensory and motor information appear to be coordinated in an object-based, spatial-coordinate system, sensitive to orientation relative to gravitational forces, but invariant to grasp forcefulness.

  7. Development of the bedridden person support system using hand gesture.

    Science.gov (United States)

    Ichimura, Kouhei; Magatani, Kazushige

    2015-08-01

    The purpose of this study is to support bedridden and physically handicapped persons who live independently. In this study, we developed an electric-appliance control system that can be used on the bed. The subject can control electric appliances using hand motions. The infrared sensor of a Kinect is used for hand motion detection. Our developed system was tested with several normal subjects and the results of the experiment were evaluated. In this experiment, all subjects lay on the bed and tried to control our system. As a result, most subjects were able to control our developed system perfectly. However, motion tracking of some subjects' hands was reset forcibly, because it was difficult for these subjects to make the system recognize their opened hand. From these results, we think that if this problem is resolved, our support system will be useful for bedridden and physically handicapped persons.

  8. A biometric authentication model using hand gesture images.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures of hand sign language is proposed. The measurements of hand gestures can be acquired sequentially by a low-cost video camera. There could possibly be another level of contextual information associated with these hand signs to be used in biometric authentication. As an analogue, instead of typing a password 'iloveu' in text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using a sequence of hand signs, 'i', 'l', 'o', 'v', 'e', and 'u'. Subsequently, the features extracted from the hand gesture images, which are integrally fuzzy in nature, are recognized by a classification model that verifies whether the signer is who he claims to be by examining his hand shape and the postures used in making those signs. It is believed that everybody has certain slight but unique behavioral characteristics in sign language, as are the different hand shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histogram and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation is conducted to investigate the efficacy of this novel biometric authentication model, which shows up to 93.75% recognition accuracy.
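
    A minimal sketch of the kind of feature-plus-classifier pipeline the abstract describes: a colour histogram fed to a standard classifier used for accept/reject verification. The data, labels and the choice of an SVM with probability outputs are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a colour-histogram + classifier pipeline in the spirit of the abstract
# (intensity/colour features fed to a standard classifier); data and labels are toy.
import numpy as np
from sklearn.svm import SVC

def colour_histogram(rgb_image, bins=8):
    """Concatenated per-channel histograms, normalised to sum to 1."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(rgb_image[..., c], bins=bins, range=(0, 256))
        feats.append(h)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()

rng = np.random.default_rng(0)
# Toy "hand sign" images for two enrolled signers (labels 0 and 1).
images = rng.integers(0, 256, size=(200, 64, 64, 3))
labels = rng.integers(0, 2, size=200)

X = np.array([colour_histogram(im) for im in images])
clf = SVC(probability=True).fit(X, labels)

claimed_id = 1
score = clf.predict_proba(colour_histogram(images[0]).reshape(1, -1))[0, claimed_id]
print("accept" if score > 0.5 else "reject", round(score, 2))
```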

  9. An interactive VR system based on full-body tracking and gesture recognition

    Science.gov (United States)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with a hand-held input device, which leads to a low degree of presence. There are other solutions using sensors like Leap Motion to recognize the gestures of users in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize the gestures of users, such as swiping, pressing and manipulating virtual objects. Combining full body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.

  10. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan hand gestures.

  11. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  12. Unconstrained and contactless hand geometry biometrics.

    Science.gov (United States)

    de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier

    2011-01-01

    This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature: namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements like mobile devices.

  13. Unconstrained and Contactless Hand Geometry Biometrics

    Directory of Open Access Journals (Sweden)

    Carmen Sánchez-Ávila

    2011-10-01

    Full Text Available This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature: namely Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements like mobile devices.

  14. Clean Hands Count

    Medline Plus

    Full Text Available

  15. Mind the hand

    DEFF Research Database (Denmark)

    Davidsen, Jacob; Christiansen, Ellen Tove

    2014-01-01

    Apart from touching the screen, what is the role of the hands for children collaborating around touchscreens? Based on embodied and multimodal interaction analysis of 8- and 9-year old pairs collaborating around touchscreens, we conclude that children use their hands to constrain and control acce...

  16. Clean Hands Count

    Medline Plus

    Full Text Available

  17. HAND INJURIES IN VOLLEYBALL

    NARCIS (Netherlands)

    BHAIRO, NH; NIJSTEN, MWN; VANDALEN, KC; TENDUIS, HJ

    We studied the long-term sequelae of hand injuries as a result of playing volleyball. In a retrospective study, 226 patients with injuries of the hand who were seen over a 5-year period at our Trauma Department, were investigated. Females accounted for 66 % of all injuries. The mean age was 26

  18. Clean Hands Count

    Medline Plus

    Full Text Available

  19. Clean Hands Count

    Medline Plus

    Full Text Available

  20. Clean Hands Count

    Medline Plus

    Full Text Available

  1. Clean Hands Count

    Medline Plus

    Full Text Available

  2. Clean Hands Count

    Medline Plus

    Full Text Available

  3. Clean Hands Count

    Medline Plus

    Full Text Available

  4. Clean Hands Count

    Medline Plus

    Full Text Available

  5. Clean Hands Count

    Medline Plus

    Full Text Available

  6. Clean Hands Count

    Medline Plus

    Full Text Available

  7. Clean Hands Count

    Medline Plus

    Full Text Available

  8. "Puffy hand syndrome".

    Science.gov (United States)

    Chouk, Mickaël; Vidon, Claire; Deveza, Elise; Verhoeven, Frank; Pelletier, Fabien; Prati, Clément; Wendling, Daniel

    2017-01-01

    Intravenous drug addiction is responsible for many complications, especially cutaneous and infectious ones. There is a syndrome, rarely observed in rheumatology, resulting in "puffy hands": the puffy hand syndrome. We report two cases of this condition from our rheumatology consultation. Both patients had a history of intravenous drug addiction. They presented with bilateral, painless, non-pitting edema of the hands, occurring in one patient during heroin intoxication and in the other 2 years after stopping injections. In both patients, additional investigations (biological, radiological, ultrasound) were unremarkable, which helped us, in this context, to make the diagnosis of puffy hand syndrome. The pathophysiology, still unclear, is based in part on the lymphatic toxicity of the drugs and their excipients. There is no etiological treatment, but nightly elastic compression improved the hand edema in one of our patients. Copyright © 2016 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  9. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    Science.gov (United States)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction via hand and finger movements is achieved through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual core processor and 1 GB RAM.

  10. Grasps Recognition and Evaluation of Stroke Patients for Supporting Rehabilitation Therapy

    Directory of Open Access Journals (Sweden)

    Beatriz Leon

    2014-01-01

    Full Text Available Stroke survivors often suffer impairments of their wrist and hand. Robot-mediated rehabilitation techniques have been proposed as a way to enhance conventional therapy, based on intensive repeated movements. Amongst the activities of daily living, grasping is one of the most recurrent. Our aim is to incorporate the detection of grasps into the machine-mediated rehabilitation framework so that they can be used in interactive therapeutic games. In this study, we developed and tested a method based on support vector machines for recognizing various grasp postures while wearing a passive exoskeleton for hand and wrist rehabilitation after stroke. The experiment was conducted with ten healthy subjects and eight stroke patients performing the grasping gestures. The method was tested in terms of accuracy and robustness with respect to inter-subject variability and differences between grasps. Our results show reliable recognition while also indicating that the recognition accuracy can be used to assess the patients' ability to consistently repeat the gestures. Additionally, a grasp quality measure was proposed to measure the capability of stroke patients to perform grasp postures in a similar way to healthy people. These two measures can potentially be used as complementary measures to other upper limb motion tests.

  11. A new approach to hand-based authentication

    Science.gov (United States)

    Amayeh, G.; Bebis, G.; Erol, A.; Nicolescu, M.

    2007-04-01

    Hand-based authentication is a key biometric technology with a wide range of potential applications both in industry and government. Traditionally, hand-based authentication is performed by extracting information from the whole hand. To account for hand and finger motion, guidance pegs are employed to fix the position and orientation of the hand. In this paper, we consider a component-based approach to hand-based verification. Our objective is to investigate the discrimination power of different parts of the hand in order to develop a simpler, faster, and possibly more accurate and robust verification system. Specifically, we propose a new approach which decomposes the hand in different regions, corresponding to the fingers and the back of the palm, and performs verification using information from certain parts of the hand only. Our approach operates on 2D images acquired by placing the hand on a flat lighting table. Using a part-based representation of the hand allows the system to compensate for hand and finger motion without using any guidance pegs. To decompose the hand in different regions, we use a robust methodology based on morphological operators which does not require detecting any landmark points on the hand. To capture the geometry of the back of the palm and the fingers in sufficient detail, we employ high-order Zernike moments which are computed using an efficient methodology. The proposed approach has been evaluated on a database of 100 subjects with 10 images per subject, illustrating promising performance. Comparisons with related approaches using the whole hand for verification illustrate the superiority of the proposed approach. Moreover, qualitative comparisons with state-of-the-art approaches indicate that the proposed approach has comparable or better performance.
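
    The following sketch illustrates part-based matching of segmented hand regions by comparing rotation-invariant shape moments. For brevity it uses OpenCV's Hu moments as a simpler stand-in for the high-order Zernike moments used in the paper, and toy rectangular "finger" regions replace the morphological segmentation step.

```python
# Simplified part-based matching sketch. The paper uses high-order Zernike moments on
# finger/palm regions segmented with morphological operators; here, rotation-invariant
# Hu moments (a simpler stand-in) are compared for two already-segmented binary regions.
import cv2
import numpy as np

def shape_descriptor(binary_region):
    """Log-scaled Hu moments of a binary silhouette region (uint8, values 0/255)."""
    hu = cv2.HuMoments(cv2.moments(binary_region, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def finger_distance(region_a, region_b):
    return float(np.linalg.norm(shape_descriptor(region_a) - shape_descriptor(region_b)))

# Toy regions standing in for a segmented index finger from two hand images.
a = np.zeros((100, 100), np.uint8)
b = np.zeros((100, 100), np.uint8)
cv2.rectangle(a, (40, 10), (55, 90), 255, -1)
cv2.rectangle(b, (38, 12), (57, 88), 255, -1)
print("distance:", finger_distance(a, b))   # small distance -> similar finger shapes
```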

  12. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  13. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  14. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.

  15. Avoiding unfavorable results in postburn contracture hand

    Science.gov (United States)

    Bhattacharya, Sameek

    2013-01-01

    Deformities of the hands are a fairly common sequel of burns, especially in the developing world, because of the high incidence of burns and limited access to standard treatment and rehabilitation. The best outcome for a burnt hand is when deformities are prevented from developing. A good functional result is possible when due consideration is paid to the hands during resuscitation, excisional surgery, reconstructive surgery and physiotherapy. Post-burn deformities of the hand develop due to direct thermal damage or secondary to an intrinsic-minus position caused by oedema or vascular insufficiency. During the acute phase the concerns are maintenance of circulation, minimization of oedema, prevention of unphysiological positioning and wound closure with autogenous tissue as soon as possible. The rehabilitation programme during the acute phase starts from day one and continues until the hand has healed and regained a full range of motion. Full-blown hand contractures are challenging to correct and become more difficult as time passes. Long-standing cases often end up with attenuation of the extensor apparatus leading to swan-neck and boutonniere deformities, muscle shortening and bony ankylosis. The major and most common pitfall after contracture release is relapse, and the treatment protocol for contractures is directed towards countering this tendency. This article aims to guide the surgeon in obtaining optimal hand function and avoiding pitfalls at the different stages of management of hand burns. The reasons for an unfavourable outcome of a burnt hand are a possible lack of optimal care in the acute phase, while planning and performing reconstructive procedures, and during aftercare and rehabilitation. PMID:24501479

  16. (In)Visible Hand(s)

    OpenAIRE

    Predrag Zima

    2007-01-01

    In this paper, the author discusses the regulatory role of the state and of legal norms in a market economy, especially in so-called transition countries. Legal policy and other questions concerning the state and the free market economy are closely connected here, because the state must ensure with legal norms that economic processes are not interrupted: only the state can establish the legal basis for a market economy. The free market's invisible hand acts on questions such as: what is to be produced,...

  17. Prevention of hand eczema

    DEFF Research Database (Denmark)

    Fisker, Maja H; Ebbehøj, Niels E; Vejlstrup, Søren Grove

    2018-01-01

    Objective Occupational hand eczema has adverse health and socioeconomic impacts for the afflicted individuals and society. Prevention and treatment strategies are needed. This study aimed to assess the effectiveness of an educational intervention on sickness absence, quality of life and severity...... of hand eczema. Methods PREVEX (PreVention of EXema) is an individually randomized, parallel-group superiority trial investigating the pros and cons of one-time, 2-hour, group-based education in skin-protective behavior versus treatment as usual among patients with newly notified occupational hand eczema...

  18. Magnetotherapy in hand osteoarthritis: a pilot trial.

    Science.gov (United States)

    Kanat, Elvan; Alp, Alev; Yurtkuran, Merih

    2013-12-01

    To evaluate the effectiveness of magnetotherapy in the treatment of hand osteoarthritis (HO). In this randomized controlled single-blind follow-up study, patients with HO were randomly assigned into 2 groups (G1 and G2). The subjects in G1 (n=25) received 25Hz, 450 pulse/s, 5-80G, magnetotherapy of totally 10 days and 20 min/day combined with active range of motion/strengthening exercises for the hand. G2 (n=25) received sham-magnetotherapy for 20 min/day for the same duration combined with the same hand exercises. Outcome measures were pain and joint stiffness evaluation, handgrip and pinchgrip strength (HPS), Duruöz and Auscan Hand Osteoarthritis Indexes (DAOI) and Short Form-36 Health Questionnaire (SF-36) administered at baseline, immediately after treatment and at the follow up. When the groups were compared with each other, improvement observed in SF-36 Pain (p<0.001), SF-36 Social Function (p=0.030), SF-36 Vitality (p=0.002), SF-36 General Health (p=0.001), Pain at rest (p<0.001), Pain at motion (p<0.001), Joint stiffness (p<0.001), DAOI (p<0.001) were in favor of G1. Changes in pain, function and quality of life scores showed significant advantage in favor of the applied electromagnetic intervention in patients with HO. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Wheelchair control by head motion

    Directory of Open Access Journals (Sweden)

    Pajkanović Aleksandar

    2013-01-01

    Full Text Available Electric wheelchairs are designed to aid paraplegics. Unfortunately, they cannot be used by persons with a higher degree of impairment, such as quadriplegics, i.e. persons who, due to age or illness, cannot move any body part except the head. Medical devices designed to help them are very complicated, rare and expensive. In this paper a microcontroller system that enables control of a standard electric wheelchair by head motion is presented. The system comprises electronic and mechanical components. A novel head motion recognition technique based on accelerometer data processing is designed. The wheelchair joystick is controlled by the system's mechanical actuator. The system can be used with several different types of standard electric wheelchairs. It is tested and verified through an experiment described in this paper.
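
    A toy illustration of mapping accelerometer readings from a head-mounted sensor to wheelchair commands. The axis conventions, thresholds and command set are assumptions made for the sake of the example; the paper's actual recognition technique is not reproduced here.

```python
# Toy mapping from head-tilt accelerometer readings to wheelchair commands
# (thresholds and axis conventions are assumptions, not taken from the paper).
def head_command(ax, ay, az, tilt_threshold=0.35):
    """ax, ay, az: acceleration in g; gravity is along +z when the head is upright."""
    if ay > tilt_threshold:
        return "forward"      # head tilted forward
    if ay < -tilt_threshold:
        return "backward"     # head tilted back
    if ax > tilt_threshold:
        return "right"        # head tilted to the right
    if ax < -tilt_threshold:
        return "left"         # head tilted to the left
    return "stop"

for sample in [(0.0, 0.5, 0.87), (-0.5, 0.0, 0.87), (0.05, -0.02, 1.0)]:
    print(sample, "->", head_command(*sample))
```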

  20. Clean Hands Count

    Medline Plus

    Full Text Available ... myths and misperceptions about hand hygiene and empower patients to play a role in their care by ...

  1. Clean Hands Count

    Medline Plus

    Full Text Available ... Clean Hands Count, Centers for Disease Control and Prevention (CDC) ...

  2. Clean Hands Count

    Medline Plus

    Full Text Available ... Clean Hands Count, Centers for Disease Control and Prevention (CDC) ...

  3. Tropical Diabetic Hand Syndrome

    African Journals Online (AJOL)

    Introduction ... diabetes. [2,3] Tropical diabetic hand syndrome is a terminology .... the importance of seeking medical attention immediately.

  4. Clean Hands Count

    Medline Plus

    Full Text Available ... Published on May 5, 2017. This video for healthcare providers is intended ...

  5. Clean Hands Count

    Medline Plus

    Full Text Available ... reminding healthcare providers to clean their hands. See: https://www.cdc.gov/handhygiene/campa... .

  6. Clean Hands Count

    Medline Plus

    Full Text Available ... empower patients to play a role in their care by asking or reminding healthcare providers to clean ...

  7. Clean Hands Count

    Medline Plus

    Full Text Available ... Clean Hands Count, Centers for Disease Control and Prevention (CDC) ...

  8. Clean Hands Count

    Medline Plus

    Full Text Available

  9. Clean Hands Count

    Medline Plus

    Full Text Available

  10. Clean Hands Count

    Medline Plus

    Full Text Available ... Published on May 5, 2017 This video for healthcare providers is intended to promote or encourage adherence ... role in their care by asking or reminding healthcare providers to clean their hands. See: https://www. ...

  11. Quadcopter Control Using Speech Recognition

    Science.gov (United States)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems that used two types of databases, an existing database and a newly created database, implemented on a quadcopter for motion control. The speech recognition system uses the Mel frequency cepstral coefficient (MFCC) method for feature extraction and is trained using a recurrent neural network (RNN). MFCC is one of the most widely used feature extraction methods for speech recognition, with a success rate of 80%-95%. The existing database was used to measure the success rate of the RNN method. The new database was created in the Indonesian language, and its success rate was then compared with the results from the existing database. Sound input from the microphone is processed on a DSP module with the MFCC method to obtain the characteristic values. These characteristic values are then classified by the trained RNN, whose output is a command. The command becomes a control input to a single board computer (SBC), whose output drives the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the kernel (operating system).
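
    A sketch of the MFCC front end described above, using the librosa library as an assumed stand-in for the paper's DSP module. A fixed-length feature vector for a short spoken command is obtained here by averaging the per-frame coefficients; in the full system this vector would be passed to the trained recurrent network to produce a motion command.

```python
# MFCC feature-extraction sketch (librosa is an assumption; the paper's DSP module
# implementation is not specified). The synthetic tone stands in for a voice command.
import numpy as np
import librosa

def command_features(signal, sr=16000, n_mfcc=13):
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return mfcc.mean(axis=1)                                      # average over time

# Synthetic 1-second signal standing in for a recorded voice command.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
y = (0.1 * np.sin(2 * np.pi * 300 * t)).astype(np.float32)

features = command_features(y, sr=sr)
print(features.shape)          # (13,) -- would be fed to the trained RNN classifier
```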

  12. Motion control report

    CERN Document Server

    2013-01-01

    Please note this is a short discount publication. In today's manufacturing environment, Motion Control plays a major role in virtually every project. The Motion Control Report provides a comprehensive overview of the technology of Motion Control: Design Considerations; Technologies; Methods to Control Motion; Examples of Motion Control in Systems; and a Detailed Vendors List.

  13. Efficient Interaction Recognition through Positive Action Representation

    Directory of Open Access Journals (Sweden)

    Tao Hu

    2013-01-01

    Full Text Available This paper proposes a novel approach that decomposes a two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. A Positive Action plays the decisive role in a two-person exchange. Thus, interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor has become widely available, the Microsoft Kinect camera, which provides RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera with which to assess two-person interaction recognition approaches. Therefore, we created a new dataset with six types of complex human interactions (named K3HI): kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach with our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method, shortens the sample training time, and therefore achieves comprehensive superiority.
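
    The classification scheme described above (continuous HMMs over per-frame features of the Positive Action) is commonly implemented by training one HMM per class and picking the class with the highest log-likelihood. The sketch below shows that pattern with hmmlearn on random stand-in features; the feature dimensionality, number of states and class names are invented, not taken from the paper.

```python
# One continuous HMM per interaction class; classify a sequence by maximum log-likelihood.
# Features here are random stand-ins for the joint/plane/velocity features of the paper.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
classes = ["kick", "punch", "push"]

def toy_sequences(n_seq=20, length=40, dim=9):
    return [rng.normal(size=(length, dim)) for _ in range(n_seq)]

# Train one HMM per class on that class' training sequences.
models = {}
for name in classes:
    seqs = toy_sequences()
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    models[name] = GaussianHMM(n_components=4, covariance_type="diag",
                               n_iter=20, random_state=0).fit(X, lengths)

# Classify a new sequence by the highest log-likelihood.
test = rng.normal(size=(40, 9))
pred = max(classes, key=lambda name: models[name].score(test))
print("predicted interaction:", pred)
```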

  14. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    Over the past few decades, face recognition has become a rapidly growing research topic due to the increasing demands in many applications of our daily life such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with a high recognition capability, low processing time, and under different illumination conditions, and different facial expressions. The thesis presents a study for the performance of the face recognition system using two techniques; the Principal Component Analysis (PCA), and the Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects including the recognition rate, and the processing time. Face recognition systems that use visual images are sensitive to variations in the lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of these solutions is to work in the Infrared (IR) spectrum. IR images have been suggested as an alternative source of information for detection and recognition of faces, when there is little or no control over lighting conditions. This arises from the fact that these images are formed due to thermal emissions from skin, which is an intrinsic property because these emissions depend on the distribution of blood vessels under the skin. On the other hand IR face recognition systems still have limitations with temperature variations and recognition of persons wearing eye glasses. In this thesis we will fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. Simulation results show that the fusion of visible and
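
    The thesis fuses visible and IR face images using the wavelet transform. The sketch below shows one generic single-level fusion rule with PyWavelets (average the approximation coefficients, keep the detail coefficients of larger magnitude); the specific fusion rule and wavelet used in the thesis are not reproduced here, and the random arrays stand in for registered face images.

```python
# Generic single-level wavelet fusion of a visible and an IR face image (PyWavelets):
# blend the low-frequency approximation, keep the stronger of the two detail bands.
import numpy as np
import pywt

def fuse_wavelet(visible, infrared, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(visible.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(infrared.astype(float), wavelet)
    cA = 0.5 * (cA1 + cA2)                                      # blend low frequencies
    details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # keep stronger edges
                    for d1, d2 in [(cH1, cH2), (cV1, cV2), (cD1, cD2)])
    return pywt.idwt2((cA, details), wavelet)

rng = np.random.default_rng(0)
vis, ir = rng.random((2, 128, 128))    # stand-ins for registered visible/IR face images
fused = fuse_wavelet(vis, ir)
print(fused.shape)
```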

  15. Face Detection and Recognition

    National Research Council Canada - National Science Library

    Jain, Anil K

    2004-01-01

    This report describes research efforts towards developing algorithms for a robust face recognition system to overcome many of the limitations found in existing two-dimensional facial recognition systems...

  16. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones.

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-08-20

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user's daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.

  17. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones

    Directory of Open Access Journals (Sweden)

    Hansong Guo

    2016-08-01

    Full Text Available As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.

  18. Graphical symbol recognition

    OpenAIRE

    K.C. , Santosh; Wendling , Laurent

    2015-01-01

    International audience; The chapter focuses on one of the key issues in document image processing i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The chapter covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. It, in general, contains research problems, state-of-the-art methods that convey basic s...

  19. Hands of early primates.

    Science.gov (United States)

    Boyer, Doug M; Yapuncich, Gabriel S; Chester, Stephen G B; Bloch, Jonathan I; Godinot, Marc

    2013-12-01

    Questions surrounding the origin and early evolution of primates continue to be the subject of debate. Though anatomy of the skull and inferred dietary shifts are often the focus, detailed studies of postcrania and inferred locomotor capabilities can also provide crucial data that advance understanding of transitions in early primate evolution. In particular, the hand skeleton includes characteristics thought to reflect foraging, locomotion, and posture. Here we review what is known about the early evolution of primate hands from a comparative perspective that incorporates data from the fossil record. Additionally, we provide new comparative data and documentation of skeletal morphology for Paleogene plesiadapiforms, notharctines, cercamoniines, adapines, and omomyiforms. Finally, we discuss implications of these data for understanding locomotor transitions during the origin and early evolutionary history of primates. Known plesiadapiform species cannot be differentiated from extant primates based on either intrinsic hand proportions or hand-to-body size proportions. Nonetheless, the presence of claws and a different metacarpophalangeal [corrected] joint form in plesiadapiforms indicate different grasping mechanics. Notharctines and cercamoniines have intrinsic hand proportions with extremely elongated proximal phalanges and digit rays relative to metacarpals, resembling tarsiers and galagos. But their hand-to-body size proportions are typical of many extant primates (unlike those of tarsiers, and possibly Teilhardina, which have extremely large hands). Non-adapine adapiforms and omomyids exhibit additional carpal features suggesting more limited dorsiflexion, greater ulnar deviation, and a more habitually divergent pollex than observed plesiadapiforms. Together, features differentiating adapiforms and omomyiforms from plesiadapiforms indicate increased reliance on vertical prehensile-clinging and grasp-leaping, possibly in combination with predatory behaviors in

  20. Augmented robotic device for EVA hand manoeuvres

    Science.gov (United States)

    Matheson, Eloise; Brooker, Graham

    2012-12-01

    During extravehicular activities (EVAs), pressurised space suits can lead to difficulties in performing hand manoeuvres and fatigue. This is often the cause of EVAs being terminated early, or taking longer to complete. Assistive robotic gloves can be used to augment the natural motion of a human hand, meaning work can be carried out more efficiently with less stress to the astronaut. Lightweight and low profile solutions must be found in order for the assistive robotic glove to be easily integrated with a space suit pressure garment. Pneumatic muscle actuators combined with force sensors are one such solution. These actuators are extremely light, yet can output high forces using pressurised gases as the actuation drive. Their movement is omnidirectional, so when combined with a flexible exoskeleton that itself provides a degree of freedom of movement, individual fingers can be controlled during flexion and extension. This setup allows actuators and other hardware to be stored remotely on the user's body, resulting in the least possible mass being supported by the hand. Two prototype gloves have been developed at the University of Sydney; prototype I using a fibreglass exoskeleton to provide flexion force, and prototype II using torsion springs to achieve the same result. The gloves have been designed to increase the ease of human movements, rather than to add unnatural ability to the hand. A state space control algorithm has been developed to ensure that human initiated movements are recognised, and calibration methods have been implemented to accommodate the different characteristics of each wearer's hands. For this calibration technique, it was necessary to take into account the natural tremors of the human hand which may have otherwise initiated unexpected control signals. Prototype I was able to actuate the user's hand in 1 degree of freedom (DOF) from full flexion to partial extension, and prototype II actuated a user's finger in 2 DOF with forces achieved

  1. Development of a prototype over-actuated biomimetic prosthetic hand.

    Directory of Open Access Journals (Sweden)

    Matthew R Williams

    Full Text Available The loss of a hand can greatly affect quality of life. A prosthetic device that can mimic normal hand function is very important to physical and mental recuperation after hand amputation, but the currently available prosthetics do not fully meet the needs of the amputee community. Most prosthetic hands are not dexterous enough to grasp a variety of shaped objects, and those that are tend to be heavy, leading to discomfort while wearing the device. In order to attempt to better simulate human hand function, a dexterous hand was developed that uses an over-actuated mechanism to form grasp shape using intrinsic joint mounted motors in addition to a finger tendon to produce large flexion force for a tight grip. This novel actuation method allows the hand to use small actuators for grip shape formation, and the tendon to produce high grip strength. The hand was capable of producing fingertip flexion force suitable for most activities of daily living. In addition, it was able to produce a range of grasp shapes with natural, independent finger motion, and appearance similar to that of a human hand. The hand also had a mass distribution more similar to a natural forearm and hand compared to contemporary prosthetics due to the more proximal location of the heavier components of the system. This paper describes the design of the hand and controller, as well as the test results.

  2. Automatic recognition of falls in gait-slip training: Harness load cell based criteria.

    Science.gov (United States)

    Yang, Feng; Pai, Yi-Chung

    2011-08-11

    Over-head-harness systems, equipped with load cell sensors, are essential to the participants' safety and to the outcome assessment in perturbation training. The purpose of this study was to first develop an automatic outcome recognition criterion among young adults for gait-slip training and then verify such criterion among older adults. Each of 39 young and 71 older subjects, all protected by safety harness, experienced 8 unannounced, repeated slips, while walking on a 7-m walkway. Each trial was monitored with a motion capture system, bilateral ground reaction force (GRF), harness force, and video recording. The fall trials were first unambiguously identified with careful visual inspection of all video records. The recoveries without balance loss (in which subjects' trailing foot landed anteriorly to the slipping foot) were also first fully recognized from motion and GRF analyses. These analyses then set the gold standard for the outcome recognition with load cell measurements. Logistic regression analyses based on young subjects' data revealed that the peak load cell force was the best predictor of falls (with 100% accuracy) at the threshold of 30% body weight. On the other hand, the peak moving average force of the load cell across a 1-s period was the best predictor (with 100% accuracy) separating recoveries with backward balance loss (in which the recovery step landed posterior to slipping foot) from harness assistance at the threshold of 4.5% body weight. These threshold values were fully verified using the data from older adults (100% accuracy in recognizing falls). Because of the increasing popularity of perturbation training coupled with the protective over-head-harness system, this new criterion could have far-reaching implications in automatic outcome recognition during the movement therapy. Copyright © 2011 Elsevier Ltd. All rights reserved.
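
    The two reported thresholds lend themselves to a very small decision rule. The sketch below encodes them directly: a peak harness force above 30% body weight marks a fall, and otherwise a peak 1-s moving-average force above 4.5% body weight marks a harness-assisted trial. The sampling rate and function signature are assumptions; only the two thresholds come from the abstract.

    import numpy as np

    def classify_trial(load_cell_force, body_weight, fs=600):
        """Classify one slip trial from its harness load-cell trace.

        load_cell_force : 1-D array of harness force samples (same unit as body_weight)
        body_weight     : subject body weight (e.g., in N)
        fs              : sampling rate in Hz (an assumption; not given in the abstract)
        """
        # Criterion 1 (from the abstract): peak force > 30% body weight -> fall
        if np.max(load_cell_force) > 0.30 * body_weight:
            return "fall"
        # Criterion 2: peak 1-s moving-average force > 4.5% body weight
        # separates harness-assisted trials from recoveries with backward balance loss
        window = min(int(fs), len(load_cell_force))
        moving_avg = np.convolve(load_cell_force, np.ones(window) / window, mode="valid")
        if np.max(moving_avg) > 0.045 * body_weight:
            return "harness-assisted"
        return "recovery"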

  3. AUTOMATIC RECOGNITION OF FALLS IN GAIT-SLIP: A HARNESS LOAD CELL BASED CRITERION

    Science.gov (United States)

    Yang, Feng; Pai, Yi-Chung

    2012-01-01

    Over-head-harness systems, equipped with load cell sensors, are essential to the participants’ safety and to the outcome assessment in perturbation training. The purpose of this study was to first develop an automatic outcome recognition criterion among young adults for gait-slip training and then verify such criterion among older adults. Each of 39 young and 71 older subjects, all protected by safety harness, experienced 8 unannounced, repeated slips, while walking on a 7-m walkway. Each trial was monitored with a motion capture system, bilateral ground reaction force (GRF), harness force and video recording. The fall trials were first unambiguously identified with careful visual inspection of all video records. The recoveries without balance loss (in which subjects’ trailing foot landed anteriorly to the slipping foot) were also first fully recognized from motion and GRF analyses. These analyses then set the gold standard for the outcome recognition with load cell measurements. Logistic regression analyses based on young subjects’ data revealed that peak load cell force was the best predictor of falls (with 100% accuracy) at the threshold of 30% body weight. On the other hand, the peak moving average force of the load cell across a 1-s period was the best predictor (with 100% accuracy) separating recoveries with backward balance loss (in which the recovery step landed posterior to slipping foot) from harness assistance at the threshold of 4.5% body weight. These threshold values were fully verified using the data from older adults (100% accuracy in recognizing falls). Because of the increasing popularity of perturbation training coupled with the protective over-head-harness system, this new criterion could have far-reaching implications in automatic outcome recognition during the movement therapy. PMID:21696744

  4. Hand eczema: An update

    Directory of Open Access Journals (Sweden)

    Chembolli Lakshmi

    2012-01-01

    Full Text Available Eczema, the commonest disorder afflicting the hands, is also the commonest occupational skin disease (OSD). In the dermatology outpatient departments, only the severe cases are diagnosed, since patients rarely report with early hand dermatitis. Mild forms are picked up only during occupational screening. Hand eczema (HE) can evolve into a chronic condition with persistent disease even after avoiding contact with the incriminated allergen/irritant. The important risk factors for hand eczema are atopy (especially the presence of dermatitis), wet work, and contact allergy. The higher prevalence in women as compared to men in most studies is related to environmental factors and is mainly applicable to younger women in their twenties. Preventive measures play a very important role in therapy as they enable the affected individuals to retain their employment and livelihood. This article reviews established preventive and therapeutic options and newer drugs like alitretinoin in hand eczema, with a mention of the etiology and morphology. Identifying the etiological factors is of paramount importance, as avoiding or minimizing these factors plays an important role in treatment.

  5. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor

    Directory of Open Access Journals (Sweden)

    Woosuk Kim

    2018-03-01

    Full Text Available In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.

  6. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.

    Science.gov (United States)

    Kim, Woosuk; Kim, Myunggyu

    2018-03-19

    In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
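
    A heavily simplified stand-in for the described sequence classifier is sketched below: fixed-length sliding windows of wearable-sensor samples are labelled with sub-motion classes by a small neural network, and segment boundaries are read off wherever the predicted class changes. The window length, hop size and scikit-learn MLP are assumptions; the paper itself uses a deep sequence model.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    WIN, STEP = 50, 10  # assumed window length and hop, in sensor samples

    def windows(signal):
        """signal: (T, C) array of sensor samples -> (feature matrix, window start indices)."""
        starts = list(range(0, len(signal) - WIN + 1, STEP))
        return np.stack([signal[i:i + WIN].ravel() for i in starts]), starts

    def train(streams, per_sample_labels):
        """streams: list of (T, C) arrays; per_sample_labels: matching (T,) sub-motion labels."""
        X, y = [], []
        for sig, lab in zip(streams, per_sample_labels):
            feats, starts = windows(sig)
            X.append(feats)
            y.extend(lab[i + WIN // 2] for i in starts)  # label each window by its centre sample
        return MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(np.concatenate(X), y)

    def segment(model, stream):
        """Return (sample index, predicted sub-motion) at every predicted class change."""
        feats, starts = windows(stream)
        pred = model.predict(feats)
        return [(starts[i], pred[i]) for i in range(len(pred)) if i == 0 or pred[i] != pred[i - 1]]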

  7. Hand Hygiene: When and How

    Science.gov (United States)

    Hand Hygiene: When and How (August 2009). Guidance poster covering how to handrub and how to handwash (rub hands for hand hygiene; wash hands when visibly soiled), the recommended duration of each, and the "Your 5 Moments for Hand Hygiene" indications (e.g., before touching a patient).

  8. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and it does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  9. Recognition and Toleration

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2010-01-01

    Recognition and toleration are ways of relating to the diversity characteristic of multicultural societies. The article concerns the possible meanings of toleration and recognition, and the conflict that is often claimed to exist between these two approaches to diversity. Different forms or interpretations of recognition and toleration are considered, confusing and problematic uses of the terms are noted, and the compatibility of toleration and recognition is discussed. The article argues that there is a range of legitimate and importantly different conceptions of both toleration and recognition...

  10. Angle-of-arrival-based gesture recognition using ultrasonic multi-frequency signals

    KAUST Repository

    Chen, Hui

    2017-11-02

    Hand gestures are tools for conveying information, expressing emotion, interacting with electronic devices or even serving disabled people as a second language. A gesture can be recognized by capturing the movement of the hand, in real time, and classifying the collected data. Several commercial products such as Microsoft Kinect, Leap Motion Sensor, Synertial Gloves and HTC Vive have been released and new solutions have been proposed by researchers to handle this task. These systems are mainly based on optical measurements, inertial measurements, ultrasound signals and radio signals. This paper proposes an ultrasonic-based gesture recognition system using AOA (Angle of Arrival) information of ultrasonic signals emitted from a wearable ultrasound transducer. The 2-D angles of the moving hand are estimated using multi-frequency signals captured by a fixed receiver array. A simple redundant dictionary matching classifier is designed to recognize gestures representing the numbers from '0' to '9' and compared with a neural network classifier. Average classification accuracies of 95.5% and 94.4% are obtained, respectively, using the two classification methods.
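
    The classification step can be approximated by simple template matching over AoA trajectories, as sketched below: each gesture is a time series of 2-D angles that is resampled to a fixed length and assigned the label of the closest stored example. This nearest-template scheme is an illustrative simplification of the paper's redundant dictionary matching classifier; the resampling length is an assumption.

    import numpy as np

    def resample(trace, length=64):
        """trace: (T, 2) array of (azimuth, elevation) samples -> (length, 2), linearly resampled."""
        t_old = np.linspace(0.0, 1.0, len(trace))
        t_new = np.linspace(0.0, 1.0, length)
        return np.column_stack([np.interp(t_new, t_old, trace[:, k]) for k in range(2)])

    def build_dictionary(examples):
        """examples: dict mapping gesture label -> list of (T, 2) AoA traces."""
        templates, labels = [], []
        for label, traces in examples.items():
            for tr in traces:
                templates.append(resample(tr).ravel())
                labels.append(label)
        return np.stack(templates), labels

    def classify(templates, labels, trace):
        """Assign the label of the nearest stored template (Euclidean distance)."""
        distances = np.linalg.norm(templates - resample(trace).ravel(), axis=1)
        return labels[int(np.argmin(distances))]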

  11. Left hand tactile agnosia after posterior callosal lesion.

    Science.gov (United States)

    Balsamo, Maddalena; Trojano, Luigi; Giamundo, Arcangelo; Grossi, Dario

    2008-09-01

    We report a patient with a hemorrhagic lesion encroaching upon the posterior third of the corpus callosum but sparing the splenium. She showed marked difficulties in recognizing objects and shapes perceived through her left hand, while she could appreciate elementary sensorial features of items tactually presented to the same hand flawlessly. This picture, corresponding to classical descriptions of unilateral associative tactile agnosia, was associated with finger agnosia of the left hand. This very unusual case report can be interpreted as an instance of disconnection syndrome, and allows a discussion of mechanisms involved in tactile object recognition.

  12. Design of a wearable hand exoskeleton for exercising flexion/extension of the fingers.

    Science.gov (United States)

    Jo, Inseong; Lee, Jeongsoo; Park, Yeongyu; Bae, Joonbum

    2017-07-01

    In this paper, the design of a wearable hand exoskeleton system for exercising flexion/extension of the fingers is proposed. The exoskeleton was designed with a simple and wearable structure to aid finger motions in 1 degree of freedom (DOF). A hand grasping experiment by fully-abled people was performed to investigate general hand flexion/extension motions, and the polynomial curve of general hand motions was obtained. To customize the hand exoskeleton for the user, the polynomial curve was adjusted to the joint range of motion (ROM) of the user and the optimal design of the exoskeleton structure was obtained using the optimization algorithm. A prototype divided into two parts (one part for the thumb, the other for the rest of the fingers) was actuated by only two linear motors for compact size and light weight.
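
    The customization step described above (fitting a curve to general hand motion and adjusting it to the user's joint range of motion) can be sketched as follows. The polynomial degree and the linear rescaling onto the user's measured ROM are assumptions; the paper's optimization of the exoskeleton structure itself is not reproduced here.

    import numpy as np

    def fit_general_curve(phase, joint_angle, degree=4):
        """Fit a polynomial to averaged able-bodied joint angles over a grasp cycle (phase in [0, 1])."""
        return np.polynomial.Polynomial.fit(phase, joint_angle, degree)

    def adjust_to_user(curve, user_min_deg, user_max_deg, n=100):
        """Linearly rescale the general curve onto one user's measured joint range of motion."""
        phase = np.linspace(0.0, 1.0, n)
        y = curve(phase)
        y_norm = (y - y.min()) / (y.max() - y.min())   # normalise to [0, 1]
        return phase, user_min_deg + y_norm * (user_max_deg - user_min_deg)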

  13. 8 CFR 1292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    Section 1292.2: Organizations qualified for recognition; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization...

  14. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks are proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices that aid intact but nonfunctional hands. The system is specific to the patient, who activates the grasping motion by voice command, by mechanical switch, or by myoelectric impulse. The patient retains higher-level control, while lower-level control is provided by a neural network analogous to a miniature brain. During training, the patient teaches this miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  15. Clean Hands Count

    Medline Plus

  16. Hand Eczema: Treatment options

    DEFF Research Database (Denmark)

    Lund, Tamara Theresia; Agner, Tove

    2017-01-01

    Hand eczema is a common disease; it affects young people, is often work-related, and its burden is significant for the individual as well as for society. Factors to be considered when choosing a treatment strategy are, among others, whether the eczema is acute or chronic, the severity...

  1. Wash Your Hands

    Centers for Disease Control (CDC) Podcasts

    2010-03-08

    This video shows kids how to properly wash their hands, one of the most important steps we can take to avoid getting sick and spreading germs to others.  Created: 3/8/2010 by Centers for Disease Control and Prevention (CDC).   Date Released: 3/8/2010.

  7. Matching hand radiographs

    NARCIS (Netherlands)

    Kauffman, J.A.; Slump, Cornelis H.; Bernelot Moens, H.J.

    2005-01-01

    Biometric verification and identification methods of medical images can be used to find possible inconsistencies in patient records. Such methods may also be useful for forensic research. In this work we present a method for identifying patients by their hand radiographs. We use active appearance

  11. Hands-On Calculus

    Science.gov (United States)

    Sutherland, Melissa

    2006-01-01

    In this paper we discuss manipulatives and hands-on investigations for Calculus involving volume, arc length, and surface area to motivate and develop formulae which can then be verified using techniques of integration. Pre-service teachers in calculus courses using these activities experience a classroom in which active learning is encouraged and…

  13. Hands-on Humidity.

    Science.gov (United States)

    Pankiewicz, Philip R.

    1992-01-01

    Presents five hands-on activities that allow students to detect, measure, reduce, and eliminate moisture. Students make a humidity detector and a hygrometer, examine the effects of moisture on different substances, calculate the percent of water in a given food, and examine the absorption potential of different desiccants. (MDH)

  2. Hands On Earth Science.

    Science.gov (United States)

    Weisgarber, Sherry L.; Van Doren, Lisa; Hackathorn, Merrianne; Hannibal, Joseph T.; Hansgen, Richard

    This publication is a collection of 13 hands-on activities that focus on earth science-related activities and involve students in learning about growing crystals, tectonics, fossils, rock and minerals, modeling Ohio geology, geologic time, determining true north, and constructing scale-models of the Earth-moon system. Each activity contains…

  9. Hands-On Hydrology

    Science.gov (United States)

    Mathews, Catherine E.; Monroe, Louise Nelson

    2004-01-01

    A professional school and university collaboration enables elementary students and their teachers to explore hydrology concepts and realize the beneficial functions of wetlands. Hands-on experiences involve young students in determining water quality at field sites after laying the groundwork with activities related to the hydrologic cycle,…

  13. Robot hands and extravehicular activity

    Science.gov (United States)

    Marcus, Beth

    1987-01-01

    Extravehicular activity (EVA) is crucial to the success of both current and future space operations. As space operations have evolved in complexity, so has the demand placed on the EVA crewman. In addition, some NASA requirements for human capabilities at remote or hazardous sites were identified. One of the keys to performing useful EVA tasks is the ability to manipulate objects accurately, quickly and without early or excessive fatigue. The current suit employs a glove which enables the crewman to perform grasping tasks, use tools, turn switches, and perform other tasks for short periods of time. However, the glove's bulk and resistance to motion ultimately cause fatigue. Due to this limitation it may not be possible to meet the productivity requirements that will be placed on the EVA crewman of the future with the current or developmental Extravehicular Mobility Unit (EMU) hardware. In addition, this hardware will not meet the requirements for remote or hazardous operations. In an effort to develop ways of improving crew productivity, a contract was awarded to develop a prototype anthropomorphic robotic hand (ARH) for use with an extravehicular space suit. The first step in this program was to perform a design study which investigated the basic technology required for the development of an ARH to enhance crew performance and productivity. The design study phase of the contract and some additional development work are summarized.

  14. Motion in radiotherapy

    DEFF Research Database (Denmark)

    Korreman, Stine Sofia

    2012-01-01

    This review considers the management of motion in photon radiation therapy. An overview is given of magnitudes and variability of motion of various structures and organs, and how the motion affects images by producing artifacts and blurring. Imaging of motion is described, including 4DCT and 4DPE...

  15. More than two HANDs to tango.

    Science.gov (United States)

    Kolson, Dennis; Buch, Shilpa

    2013-12-01

    Developing a validated tool for the rapid and efficient assessment of cognitive functioning in HIV-infected patients in a typical outpatient clinical setting has been an unmet goal of HIV research since the recognition of the syndrome of HIV-associated dementia (HAD) nearly 20 years ago. In this issue of JNIP Cross et al. report the application of the International HIV Dementia Scale (IHDS) in a U.S.-based urban outpatient clinic to evaluate its utility as a substitute for the more time- and effort-demanding formalized testing criteria known as the Frascati criteria that was developed in 2007 to define the syndrome of HIV-associated neurocognitive disorders (HAND). In this study an unselected cohort of 507 individuals (68 % African American) that were assessed using the IHDS in a cross-sectional study revealed a 41 % prevalence of cognitive impairment (labeled ‘symptomatic HAND’) that was associated with African American race, older age, unemployment, education level, and depression. While the associations between cognitive impairment and older age, education, unemployment status and depression in HIV-infected patients are not surprising, the association with African American ancestry and cognitive impairment in the setting of HIV infection is a novel finding of this study. This commentary discusses several important issues raised by the study, including the pitfalls of assessing cognitive functioning with rapid screening tools, cognitive testing criteria, normative testing control groups, accounting for HAND co-morbidity factors, considerations for clinical trials assessing HAND, and selective population vulnerability to HAND.

  16. Optical Pattern Recognition

    Science.gov (United States)

    Yu, Francis T. S.; Jutamulia, Suganda

    2008-10-01

    Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalesky and Haldum M. Oxaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhou Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Ziangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.

  17. Hand VR Exergame for Occupational Health Care.

    Science.gov (United States)

    Ortiz, Saskia; Uribe-Quevedo, Alvaro; Kapralos, Bill

    2016-01-01

    The widespread use and ubiquity of mobile computing technologies such as smartphones, tablets, laptops and portable gaming consoles have led to an increase in musculoskeletal disorders due to overuse, bad posture, repetitive movements, fixed postures and physical de-conditioning caused by low muscular demands while using (and over-using) these devices. In this paper we present the development of a hand-motion-based virtual reality exergame for occupational health purposes that allows the user to perform simple exercises, using a cost-effective non-invasive motion capture device, to help overcome and prevent some of the musculoskeletal problems associated with the over-use of keyboards and mobile devices.

  18. Linguistic approach to object recognition by grasping

    Energy Technology Data Exchange (ETDEWEB)

    Marik, V

    1982-01-01

    A method for recognizing both three-dimensional object shapes and their sizes by grasping them with an anthropomorphic five-finger artificial hand is described. The hand is equipped with position-sensing elements in the joints of the fingers and with a tactile transducer net on the palm surface. The linguistic method uses formal grammars and languages for the pattern description. The recognition is arranged hierarchically, each level differing from the others in the formal language used. On every level the pattern description is generated and verified from the syntactic and semantic points of view. The results of the implementation of the recognition of cones, pyramids, spheres, prisms and cylinders are presented and discussed. 8 references.

  19. A pneumatic muscle hand therapy device.

    Science.gov (United States)

    Koeneman, E J; Schultz, R S; Wolf, S L; Herring, D E; Koeneman, J B

    2004-01-01

    Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial; however, they are expensive because, in part, they rely on individualized interaction between the patient and a rehabilitation specialist. The development of a pneumatic muscle-driven hand therapy device, the Mentor™, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.

  20. Motion Transplantation Techniques: A Survey

    NARCIS (Netherlands)

    van Basten, Ben; Egges, Arjan

    2012-01-01

    During the past decade, researchers have developed several techniques for transplanting motions. These techniques transplant a partial auxiliary motion, possibly defined for a small set of degrees of freedom, on a base motion. Motion transplantation improves motion databases' expressiveness and

  1. SOPHIA: Soft Orthotic Physiotherapy Hand Interactive Aid

    Directory of Open Access Journals (Sweden)

    Alistair C. McConnell

    2017-06-01

    Full Text Available This work describes the design, fabrication, and initial testing of a Soft Orthotic Physiotherapy Hand Interactive Aid (SOPHIA) for stroke rehabilitation. SOPHIA consists of (1) a soft robotic exoskeleton, (2) a microcontroller-based control system driven by a brain–machine interface (BMI), and (3) a sensorized glove for passive rehabilitation. In contrast to other rehabilitation devices, SOPHIA is the first modular prototype of a rehabilitation system that is capable of three tasks: aiding extension-based assistive rehabilitation, monitoring patient exercises, and guiding passive rehabilitation. Our results show that this prototype of the device is capable of helping healthy subjects to open their hand. Finger extension is triggered by a command from the BMI, while using a variety of sensors to ensure a safe motion. All data gathered from the device will be used to guide further improvements to the prototype, aiming at developing specifications for the next-generation device, which could be used in future clinical trials.

  2. Galileo and the Problems of Motion

    Science.gov (United States)

    Hooper, Wallace Edd

    Galileo's science of motion changed natural philosophy. His results initiated a broad human awakening to the intricate new world of physical order found in the midst of familiar operations of nature. His thinking was always based squarely on the academic traditions of the spiritual old world. He advanced physics by new standards of judgment drawn from mechanics and geometry, and disciplined observation of the world. My study first determines the order of composition of the earliest essays on motion and physics, ca. 1588 -1592, from internal evidence, and bibliographic evidence. There are clear signs of a Platonist critique of Aristotle, supported by Archimedes, in the Ten Section Version of On Motion, written ca. 1588, and probably the earliest of his treatises on motion or physics. He expanded upon his opening Platonic -Archimedean position by investigating the ideas of scholastic critics of Aristotle, including the Doctores Parisienses, found in his readings of the Jesuit Professors at the Collegio Romano. Their influences surfaced clearly in Galileo's Memoranda on Motion and the Dialogue on Motion, and in On Motion, which followed, ca. 1590-1592. At the end of his sojourn in Pisa, Galileo opened the road to the new physics by solving an important problem in the mechanics of Pappus, concerning motion along inclined planes. My study investigates why Galileo gave up attempts to establish a ratio between speed and weight, and why he began to seek the ratios of time and distance and speed, by 1602. It also reconstructs Galileo's development of the 1604 principle, seeking to outline its invention, elaboration, and abandonment. Then, I try to show that we have a record of Galileo's moment of recognition of the direct relation between the time of fall and the accumulated speed of motion--that great affinity between time and motion and the key to the new science of motion established before 1610. Evidence also ties the discovery of the time affinity directly to Galileo

  3. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  4. Sensor Based Motion Tracking and Recognition in Martial Arts Training

    OpenAIRE

    Agojo, Stephan

    2017-01-01

    In various martial arts, competitors are interested in quantifying and categorising techniques which are exercised during training. The implementation of embedded systems into training gear, especially a portable wireless body worn system, based on inertial sensors, facilitates the quantification and categorisation of forces and accelerations involved during the training of martial arts. The scope of this paper is to give a brief overview of contemporary technology and devices, describe key m...

  5. Arthritis of the hand - Rheumatoid

    Science.gov (United States)

    ... All Topics A-Z Videos Infographics Symptom Picker Anatomy Bones Joints Muscles Nerves Vessels Tendons About Hand Surgery What is a Hand Surgeon? What is a Hand Therapist? Media Find a Hand Surgeon Home Anatomy Rheumatoid Arthritis Email to a friend * required fields ...

  6. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  7. Equation of motion for the axial gravitational superfield

    International Nuclear Information System (INIS)

    Ogievetsky, V.; Sokatchev, E.

    1980-01-01

    Transformation properties of the axial supergravitational field variants are investigated. The equation of motion for the axial gravitational superfield is derived by direct variation of the N = 1 supergravity action. The left-hand side of this equation is a component of the torsion tensor, and the right-hand side is the supercurrent. The question about the cosmological term in supergravity is discussed

  8. Control System Design of the YWZ Multi-Fingered Dexterous Hand

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2012-07-01

    Full Text Available The manipulation abilities of a multi-fingered dexterous hand, such as real-time motion, flexibility, and grasp stability, are largely dependent on its control system. This paper develops a control system for the YWZ dexterous hand, which has five fingers and twenty degrees of freedom (DOFs). All of the finger joints of the YWZ dexterous hand are active joints, each driven by one of twenty micro-stepper motors. The main contribution of this paper is the use of stepper-motor control to actuate the hand's fingers, thus increasing the hand's feasibility. Based on the actuators of the YWZ dexterous hand, we first developed an integrated circuit board (ICB), which serves as the communication hardware between the personal computer (PC) and the YWZ dexterous hand. The ICB includes a centre controller, twenty driver chips, a USB port and other electrical parts. A communication procedure between the PC and the ICB was then developed to send the control commands that actuate the YWZ dexterous hand. Experimental results showed that under this control system the motion of the YWZ dexterous hand was real-time, and both the motion accuracy and the motion stability were reliable. Compared with other types of actuators used in dexterous hands, such as pneumatic servo cylinders, DC servo motors, and shape memory alloys, the experiments verified that stepper motors are effective, economical, controllable and stable actuators for dexterous hands.
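
    The abstract does not specify the PC-to-ICB command format, so the sketch below is a hypothetical illustration of the communication step only: a small packet carrying a joint index and a signed step count is written to the USB serial link with pyserial. The packet layout, header byte, port name and steps-per-degree value are all assumptions, not the YWZ hand's actual protocol.

    import struct
    import serial  # pyserial

    STEPS_PER_DEGREE = 10  # assumed micro-stepping resolution, not a documented value

    def open_icb(port="/dev/ttyUSB0", baud=115200):
        """Open the USB serial link to the integrated circuit board (port and baud rate are assumptions)."""
        return serial.Serial(port, baudrate=baud, timeout=0.1)

    def move_joint(link, joint_id, delta_degrees):
        """Send one relative move command for a single finger joint (0-19)."""
        steps = int(round(delta_degrees * STEPS_PER_DEGREE))
        packet = struct.pack("<BBh", 0xA5, joint_id, steps)  # hypothetical frame: header, joint, signed step count
        link.write(packet)

    # Usage (hypothetical): flex joint 3 by 15 degrees
    # link = open_icb()
    # move_joint(link, joint_id=3, delta_degrees=15.0)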

  9. Perception of biological motion from size-invariant body representations

    Directory of Open Access Journals (Sweden)

    Markus eLappe

    2015-03-01

    Full Text Available The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  10. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions, especially by observing facial expressions, is desirable for the computer in several applications. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions: we present an approach for emotion recognition from facial expression and from hand and body posture. Our model uses a multimodal emotion recognition system in which two different models, one for facial expression recognition and one for hand and body posture recognition, are combined by a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
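
    The fusion scheme described (two modality-specific classifiers whose outputs are combined by a third classifier) corresponds to a simple stacking arrangement, sketched below with scikit-learn. The choice of SVMs for the base models, logistic regression for the combiner, and the feature inputs are assumptions for illustration.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    def train_multimodal(X_face, X_posture, y):
        """X_face, X_posture: per-sample feature matrices for the two modalities; y: emotion labels."""
        face_clf = SVC(probability=True).fit(X_face, y)
        posture_clf = SVC(probability=True).fit(X_posture, y)
        # The third classifier combines the two sets of class probabilities
        meta_in = np.hstack([face_clf.predict_proba(X_face), posture_clf.predict_proba(X_posture)])
        meta_clf = LogisticRegression(max_iter=1000).fit(meta_in, y)
        return face_clf, posture_clf, meta_clf

    def predict_emotion(models, x_face, x_posture):
        """Return the fused emotion label for one sample's face and posture feature vectors."""
        face_clf, posture_clf, meta_clf = models
        meta_in = np.hstack([face_clf.predict_proba([x_face]), posture_clf.predict_proba([x_posture])])
        return meta_clf.predict(meta_in)[0]

    In practice the combining classifier would be trained on held-out base-model predictions rather than on the same training data, to avoid overfitting the fusion step.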

  11. Performance Comparison Between FEDERICA Hand and LARM Hand

    OpenAIRE

    Carbone, Giuseppe; Rossi, Cesare; Savino, Sergio

    2015-01-01

    This paper describes two robotic hands that have been developed at University Federico II of Naples and at the University of Cassino. FEDERICA Hand and LARM Hand are described in terms of design and operational features. In particular, careful attention is paid to the differences between the above-mentioned hands in terms of transmission systems. FEDERICA Hand uses tendons and pulleys to drive phalanxes, while LARM Hand uses cross four-bar linkages. Results of experime...

  12. Recognition of Handwriting from Electromyography

    Science.gov (United States)

    Linderman, Michael; Lebedev, Mikhail A.; Erlichman, Joseph S.

    2009-01-01

    Handwriting – one of the most important developments in human culture – is also a methodological tool in several scientific disciplines, most importantly handwriting recognition methods, graphology and medical diagnostics. Previous studies have relied largely on the analyses of handwritten traces or kinematic analysis of handwriting; whereas electromyographic (EMG) signals associated with handwriting have received little attention. Here we show for the first time, a method in which EMG signals generated by hand and forearm muscles during handwriting activity are reliably translated into both algorithm-generated handwriting traces and font characters using decoding algorithms. Our results demonstrate the feasibility of recreating handwriting solely from EMG signals – the finding that can be utilized in computer peripherals and myoelectric prosthetic devices. Moreover, this approach may provide a rapid and sensitive method for diagnosing a variety of neurogenerative diseases before other symptoms become clear. PMID:19707562
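
    The abstract does not detail the decoding algorithms, so the sketch below shows one common way such a mapping is built: windowed EMG features (per-channel RMS) are regressed onto pen-tip velocity, and the predicted velocities are integrated into a handwriting trace. The window length, ridge regression and sampling interval are assumptions, not the authors' method.

    import numpy as np
    from sklearn.linear_model import Ridge

    def emg_features(emg, window=100, step=20):
        """emg: (T, n_channels) array -> per-window RMS features, shape (n_windows, n_channels)."""
        starts = range(0, len(emg) - window + 1, step)
        return np.stack([np.sqrt(np.mean(emg[i:i + window] ** 2, axis=0)) for i in starts])

    def train_decoder(emg, pen_velocity, window=100, step=20):
        """pen_velocity: (T, 2) pen-tip velocity aligned sample-by-sample with the EMG."""
        X = emg_features(emg, window, step)
        centres = [i + window // 2 for i in range(0, len(emg) - window + 1, step)]
        return Ridge(alpha=1.0).fit(X, pen_velocity[centres])

    def reconstruct_trace(decoder, emg, window=100, step=20, dt=0.02):
        """Integrate predicted pen velocities into an (x, y) handwriting trace."""
        v = decoder.predict(emg_features(emg, window, step))
        return np.cumsum(v * dt, axis=0)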

  13. Second-hand signals

    DEFF Research Database (Denmark)

    Bergenholtz, Carsten

    2014-01-01

    Studies of signaling theory have traditionally focused on the dyadic link between the sender and receiver of the signal. Within a science‐based perspective this framing has led scholars to investigate how patents and publications of firms function as signals. I explore another important type...... used by various agents in their search for and assessment of products and firms. I conclude by arguing how this second‐hand nature of signals goes beyond a simple dyadic focus on senders and receivers of signals, and thus elucidates the more complex interrelations of the various types of agents...

  14. Hand grip strength

    DEFF Research Database (Denmark)

    Frederiksen, Henrik; Gaist, David; Petersen, Hans Christian

    2002-01-01

    in life is a major problem in terms of prevalence, morbidity, functional limitations, and quality of life. It is therefore of interest to find a phenotype reflecting physical functioning which has a relatively high heritability and which can be measured in large samples. Hand grip strength is known......-55%). A powerful design to detect genes associated with a phenotype is obtained using the extreme discordant and concordant sib pairs, of whom 28 and 77 dizygotic twin pairs, respectively, were found in this study. Hence grip strength is a suitable phenotype for identifying genetic variants of importance to mid...

  15. The hand and wrist

    International Nuclear Information System (INIS)

    Wood, M.B.; Berquist, T.H.

    1985-01-01

    Trauma is the most common etiologic factor leading to disability in the hand and wrist. Judicious radiographic evaluation is required for accurate assessment in practically all but the most minor of such injuries. Frequently serial radiographic evaluation is essential for directing the course of treatment and for following the healing process. A meaningful radiographic evaluation requires a comprehensive knowledge of the normal radiographic anatomy, an overview of the spectrum of pathology, and an awareness of the usual mechanisms of injury, appropriate treatment options, and relevant array of complications

  16. Paradigms in object recognition

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.

    1999-09-01

    A broad range of approaches has been proposed and applied to the complex and rather difficult task of object recognition, which involves determining object characteristics and classifying objects into one of many a priori object types. Our paper briefly reviews the three main paradigms in pattern recognition, namely Bayesian statistics, neural networks, and expert systems. (author)

  17. Infant Visual Recognition Memory

    Science.gov (United States)

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2004-01-01

    Visual recognition memory is a robust form of memory that is evident from early infancy, shows pronounced developmental change, and is influenced by many of the same factors that affect adult memory; it is surprisingly resistant to decay and interference. Infant visual recognition memory shows (a) modest reliability, (b) good discriminant…

  18. Recognition and Toleration

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2010-01-01

    Recognition and toleration are ways of relating to the diversity characteristic of multicultural societies. The article concerns the possible meanings of toleration and recognition, and the conflict that is often claimed to exist between these two approaches to diversity. Different forms or inter...

  19. Back to basics: hand hygiene and surgical hand antisepsis.

    Science.gov (United States)

    Spruce, Lisa

    2013-11-01

    Health care-associated infections (HAIs) are a significant issue in the United States and throughout the world, but following proper hand hygiene practices is the most effective and least expensive way to prevent HAIs. Hand hygiene is inexpensive and protects patients and health care personnel alike. The four general types of hand hygiene that should be performed in the perioperative environment are washing hands that are visibly soiled, hand hygiene using alcohol-based products, surgical hand scrubs, and surgical hand scrubs using an alcohol-based surgical hand rub product. Barriers to proper hand hygiene may include not thinking about it, forgetting, skin irritation, a lack of role models, or a lack of a safety culture. One strategy for improving hand hygiene practices is monitoring hand hygiene as part of a quality improvement project, but the most important aspect for perioperative team members is to set an example for other team members by following proper hand hygiene practices and reminding each other to perform hand hygiene. Copyright © 2013 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  20. Space Suit Glove Pressure Garment Metacarpal Joint and Robotic Hand Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Spacesuit glove pressure garments have been a design challenge for NASA since the inception of spacesuits. The human hand demands a complex range of motions, a close...

  1. Challenging ocular image recognition

    Science.gov (United States)

    Pauca, V. Paúl; Forkin, Michael; Xu, Xiao; Plemmons, Robert; Ross, Arun A.

    2011-06-01

    Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages for increasing the area beyond the iris, yet there are also key issues that must be addressed such as size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed where iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.

  2. Dynamic Features for Iris Recognition.

    Science.gov (United States)

    da Costa, R M; Gonzaga, A

    2012-08-01

    The human eye is sensitive to visible light. Increasing illumination on the eye causes the pupil of the eye to contract, while decreasing illumination causes the pupil to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near infra-red (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged with NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping the light-induced reflexes from affecting the quality of the digitalized image, this paper describes a device based on the consensual reflex. This biological phenomenon contracts and dilates the two pupils synchronously when illuminating one of the eyes by visible light. In this paper, we propose to capture images of the pupil of one eye using NIR illumination while illuminating the other eye using a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)." This innovative methodology proposes the extraction of information about the way the human eye reacts to light, and to use such information for biometric recognition purposes. The results demonstrate that these features are discriminating features, and, even using the Euclidean distance measure, an average accuracy of recognition of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof," because these DFs can only be extracted from living irises.

  3. Attention and apparent motion.

    Science.gov (United States)

    Horowitz, T; Treisman, A

    1994-01-01

    Two dissociations between short- and long-range motion in visual search are reported. Previous research has shown parallel processing for short-range motion and apparently serial processing for long-range motion. This finding has been replicated and it has also been found that search for short-range targets can be impaired both by using bicontrast stimuli, and by prior adaptation to the target direction of motion. Neither factor impaired search in long-range motion displays. Adaptation actually facilitated search with long-range displays, which is attributed to response-level effects. A feature-integration account of apparent motion is proposed. In this theory, short-range motion depends on specialized motion feature detectors operating in parallel across the display, but subject to selective adaptation, whereas attention is needed to link successive elements when they appear at greater separations, or across opposite contrasts.

  4. Golf hand prosthesis performance of transradial amputees.

    Science.gov (United States)

    Carey, Stephanie L; Wernke, Matthew M; Lura, Derek J; Kahle, Jason T; Dubey, Rajiv V; Highsmith, M Jason

    2015-06-01

    Typical upper limb prostheses may limit sports participation; therefore, specialized terminal devices are often needed. The purpose of this study was to evaluate the ability of transradial amputees to play golf using a specialized terminal device. Club head speed, X-factor, and elbow motion of two individuals with transradial amputations using an Eagle Golf terminal device were compared to a non-amputee during a golf swing. Measurements were collected pre/post training with various stances and grips. Both prosthesis users preferred a right-handed stance initially; however, after training, one preferred a left-handed stance. The amputees had slower club head speeds and a lower X-factor compared to the non-amputee golfer, but increased their individual elbow motion on the prosthetic side after training. Amputees enjoyed using the device, and it may provide kinematic benefits indicated by the increase in elbow flexion on the prosthetic side. The transradial amputees were able to swing a golf club with sufficient repetition, form, and velocity to play golf recreationally. Increased elbow flexion on the prosthetic side suggests a potential benefit from using the Eagle Golf terminal device. Participating in recreational sports can increase amputees' health and quality of life. © The International Society for Prosthetics and Orthotics 2014.

  5. Human body contour data based activity recognition.

    Science.gov (United States)

    Myagmarbayar, Nergui; Yuki, Yoshida; Imamoglu, Nevrez; Gonzalez, Jose; Otake, Mihoko; Yu, Wenwei

    2013-01-01

    This research work aims to develop autonomous bio-monitoring mobile robots, which are capable of tracking and measuring patients' motions, recognizing the patients' behavior based on observation data, and calling for medical personnel in emergency situations in a home environment. The robots to be developed will bring about cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs). In our previous research, a full framework was established towards this research goal. In this research, we aimed at improving the human activity recognition by using contour data of the tracked human subject, extracted from the depth images, as the signal source, instead of the lower limb joint angle data used in the previous research, which are more likely to be affected by the motion of the robot and human subjects. Several geometric parameters, such as the ratio of height to width of the tracked human subject and the distance (in pixels) between the centroid points of the upper and lower parts of the human body, were calculated from the contour data and used as the features for the activity recognition. A Hidden Markov Model (HMM) is employed to classify different human activities from the features. Experimental results showed that the human activity recognition could be achieved with a high correct rate.
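
    A minimal sketch of the recognition scheme summarized above: per-frame geometric features computed from the tracked body contour, scored by one Gaussian HMM per activity class (using the hmmlearn package). The exact features and model sizes here are simplified assumptions.

```python
# Contour-feature extraction plus per-class HMM likelihood scoring.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def contour_features(contour):
    """contour: (n_points, 2) pixel coordinates of the tracked silhouette."""
    x, y = contour[:, 0], contour[:, 1]
    height, width = y.max() - y.min(), x.max() - x.min()
    mid = y.min() + height / 2.0
    upper = contour[y < mid].mean(axis=0)
    lower = contour[y >= mid].mean(axis=0)
    return [height / max(width, 1e-6), np.linalg.norm(upper - lower)]

def train_activity_models(sequences_by_label, n_states=4):
    """sequences_by_label: {label: [feature sequence of shape (T, d), ...]}."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        models[label] = GaussianHMM(n_components=n_states, n_iter=50).fit(X, lengths)
    return models

def classify(models, seq):
    """Pick the activity whose HMM assigns the observed sequence the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```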

  6. Objects in Motion

    Science.gov (United States)

    Damonte, Kathleen

    2004-01-01

    One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…

  7. Motion compensated digital tomosynthesis

    NARCIS (Netherlands)

    van der Reijden, Anneke; van Herk, Marcel; Sonke, Jan-Jakob

    2013-01-01

    Digital tomosynthesis (DTS) is a limited angle image reconstruction method for cone beam projections that offers patient surveillance capabilities during VMAT based SBRT delivery. Motion compensation (MC) has the potential to mitigate motion artifacts caused by respiratory motion, such as blur. The

  8. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand.

    Science.gov (United States)

    Kent, Benjamin A; Engeberg, Erik D

    2014-11-07

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques.

  9. Human-inspired feedback synergies for environmental interaction with a dexterous robotic hand

    International Nuclear Information System (INIS)

    Kent, Benjamin A; Engeberg, Erik D

    2014-01-01

    Effortless control of the human hand is mediated by the physical and neural couplings inherent in the structure of the hand. This concept was explored for environmental interaction tasks with the human hand, and a novel human-inspired feedback synergy (HFS) controller was developed for a robotic hand which synchronized position and force feedback signals to mimic observed human hand motions. This was achieved by first recording the finger joint motion profiles of human test subjects, where it was observed that the subjects would extend their fingers to maintain a natural hand posture when interacting with different surfaces. The resulting human joint angle data were used as inspiration to develop the HFS controller for the anthropomorphic robotic hand, which incorporated finger abduction and force feedback in the control laws for finger extension. Experimental results showed that by projecting a broader view of the tasks at hand to each specific joint, the HFS controller produced hand motion profiles that closely mimic the observed human responses and allowed the robotic manipulator to interact with the surfaces while maintaining a natural hand posture. Additionally, the HFS controller enabled the robotic hand to autonomously traverse vertical step discontinuities without prior knowledge of the environment, visual feedback, or traditional trajectory planning techniques. (paper)

  10. A survey on vision-based human action recognition

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion

  11. Recognition Memory for Movement in Photographs: A Developmental Study.

    Science.gov (United States)

    Futterweit, Lorelle R.; Beilin, Harry

    1994-01-01

    Investigated whether children's recognition memory for movement in photographs is distorted forward in the direction of implied motion. When asked whether the second photograph was the same as or different from the first, subjects made more errors for test photographs showing the action slightly forward in time, compared with slightly backward in…

  12. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    Science.gov (United States)

    Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…

  13. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for recognizing various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. After training, the artificial neural network reliably identified cube-recognition processes, for example, perception of a left- or right-oriented Necker cube with different edge intensities. We construct an artificial neural network based on a perceptron architecture and demonstrate its effectiveness in recognizing these patterns in the experimental EEG data.
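
    The following is a hedged sketch of the perceptron-style EEG classification idea described above, using scikit-learn's MLPClassifier; the actual network layout, EEG features, and class labels (e.g., left- vs right-oriented Necker cube) are assumptions.

```python
# Train a small multilayer perceptron on per-trial EEG feature vectors.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_eeg_classifier(eeg_features, labels):
    """eeg_features: (n_trials, n_features) array; labels: cube orientation per trial."""
    X_train, X_test, y_train, y_test = train_test_split(
        eeg_features, labels, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```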

  14. On the Feasibility of Interoperable Schemes in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Miguel A. Ferrer

    2012-02-01

    Full Text Available Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  15. On the feasibility of interoperable schemes in hand biometrics.

    Science.gov (United States)

    Morales, Aythami; González, Ester; Ferrer, Miguel A

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  16. On the Feasibility of Interoperable Schemes in Hand Biometrics

    Science.gov (United States)

    Morales, Aythami; González, Ester; Ferrer, Miguel A.

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  17. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E

    2015-01-01

    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were...... recruited from nine different tertiary referral centres. All patients underwent examination by specialists in dermatology and were checked using relevant allergy testing. Patients were classified into one of the six diagnostic subgroups of HE: allergic contact dermatitis, irritant contact dermatitis, atopic...... system investigated in the present study was useful, being able to give an appropriate main diagnosis for 89% of HE patients, and for another 7% when using two main diagnoses. The fact that more than half of the patients had one or more additional diagnoses illustrates that HE is a multifactorial disease....

  18. The Hand-Foot Skin Reaction and Quality of Life Questionnaire: An Assessment Tool for Oncology

    OpenAIRE

    Anderson, Roger T.; Keating, Karen N.; Doll, Helen A.; Camacho, Fabian

    2015-01-01

    This study describes the development and validation of a brief, patient self-reported questionnaire (the hand-foot skin reaction and quality of life questionnaire) supporting its suitability for use in clinical research to aid in early recognition of symptoms, to evaluate the effectiveness of agents for hand-foot skin reaction (HFSR) or hand-foot syndrome (HFS) treatment within clinical trials, and to evaluate the impact of these treatments on HFS/R-associated patients’ health-related quality...

  19. Wide Awake Hand Surgery.

    Science.gov (United States)

    Lied, Line; Borchgrevink, Grethe E; Finsen, Vilhjalmur

    2017-09-01

    "Wide awake hand surgery", where surgery is performed in local anaesthesia with adrenaline, without sedation or a tourniquet, has become widespread in some countries. It has a number of potential advantages and we wished to evaluate it among our patients. All 122 patients treated by this method during one year were evaluated by the surgeons and the patients on a numerical scale from 0 (best/least) to 10 (worst/most). Theatre time was compared to that recorded for a year when regional or general anaesthesia had been used. The patients' mean score for the general care they had received was 0.1 (SD 0.6), for pain during lidocaine injection 2.4 (SD 2.2), for pain during surgery 0.9 (SD 1.5), and for other discomfort during surgery 0.5 (SD 1.4). Eight reported that they would want general anaesthesia if they were to be operated again. The surgeons' mean evaluation of bleeding during surgery was 1.6 (SD 1.8), oedema during surgery 0.4 (SD 1.1), general disadvantages with the method 1.0 (SD 1.6) and general advantages 6.5 (SD 4.3). The estimation of advantages was 9.9 (DS 0.5) for tendon suture. 28 patients needed intra-operative additional anaesthesia. The proportion was lower among trained hand surgeons and fell significantly during the study period. Non-surgical theatre time was 46 (SD 15) minutes during the study period and 55 (SD 22) minutes during the regional/general period (p theatre.

  20. Rheumatoid arthritis and hand surgery

    DEFF Research Database (Denmark)

    Peretz, Anne Sofie Rosenborg; Madsen, Ole Rintek; Brogren, Elisabeth

    2017-01-01

    Rheumatoid arthritis results in characteristic deformities of the hand. Medical treatment has undergone a remarkable development. However, not all patients achieve remission or tolerate the treatment. Patients who suffer from deformities and persistent synovitis may be candidates for hand surgery...

  1. Rolling Shutter Motion Deblurring

    KAUST Repository

    Su, Shuochen

    2015-06-07

    Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.

  2. Smoothing Motion Estimates for Radar Motion Compensation.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Simple motion models for complex motion environments are often not adequate for keeping radar data coherent. Even perfect motion samples applied to imperfect models may lead to interim calculations exhibiting errors that lead to degraded processing results. Herein we discuss a specific issue involving calculating motion for groups of pulses, with measurements only available at pulse-group boundaries. Acknowledgements: This report was funded by General Atomics Aeronautical Systems, Inc. (GA-ASI) Mission Systems under Cooperative Research and Development Agreement (CRADA) SC08/01749 between Sandia National Laboratories and GA-ASI. General Atomics Aeronautical Systems, Inc. (GA-ASI), an affiliate of privately-held General Atomics, is a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, radars, and electro-optic and related mission systems, including the Predator(r)/Gray Eagle(r)-series and Lynx(r) Multi-mode Radar.

  3. Curves from Motion, Motion from Curves

    Science.gov (United States)

    2000-01-01

    De linearum curvarum cum lineis rectis comparatione dissertatio geometrica - an appendix to a treatise by de Lalouvère (this was the only publication... correct solution to the problem of motion in the gravity of a permeable rotating Earth, considered by Torricelli (see §3). If the Earth is a homogeneous...in 1686, which contains the correct solution as part of a remarkably comprehensive theory of orbital motions under centripetal forces. It is a

  4. Structural motion engineering

    CERN Document Server

    Connor, Jerome

    2014-01-01

    This innovative volume provides a systematic treatment of the basic concepts and computational procedures for structural motion design and engineering for civil installations. The authors illustrate the application of motion control to a wide spectrum of buildings through many examples. Topics covered include optimal stiffness distributions for building-type structures, the role of damping in controlling motion, tuned mass dampers, base isolation systems, linear control, and nonlinear control. The book's primary objective is the satisfaction of motion-related design requirements, such as restrictions on displacement and acceleration. The book is ideal for practicing engineers and graduate students. This book also: broadens practitioners' understanding of structural motion control, the enabling technology for motion-based design; and provides readers the tools to satisfy requirements of modern, ultra-high strength materials that lack corresponding stiffness, where the motion re...

  5. 8 CFR 292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    ...; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. 292.2...; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization established in the United...

  6. Management of Atopic Hand Dermatitis

    DEFF Research Database (Denmark)

    Halling-Overgaard, Anne-Sofie; Zachariae, Claus; Thyssen, Jacob P

    2017-01-01

    This article provides an overview of clinical aspects of hand eczema in patients with atopic dermatitis. Hand eczema can be a part of atopic dermatitis itself or a comorbidity, for example, as irritant or allergic contact dermatitis. When managing hand eczema, it is important to first categorize...

  7. Hand Washing: Do's and Don'ts

    Science.gov (United States)

    ... hands frequently can help limit the transfer of bacteria, viruses and other microbes. Always wash your hands before: Preparing food or eating Treating wounds or caring for a sick person Inserting or removing contact lenses Always wash your hands after: Preparing food Using ...

  8. Hand aperture patterns in prehension.

    Science.gov (United States)

    Bongers, Raoul M; Zaal, Frank T J M; Jeannerod, Marc

    2012-06-01

    Although variations in the standard prehensile pattern can be found in the literature, these alternative patterns have never been studied systematically. This was the goal of the current paper. Ten participants picked up objects with a pincer grip. Objects (3, 5, or 7cm in diameter) were placed at 30, 60, 90, or 120cm from the hands' starting location. Usually the hand was opened gradually to a maximum immediately followed by hand closing, called the standard hand opening pattern. In the alternative opening patterns the hand opening was bumpy, or the hand aperture stayed at a plateau before closing started. Two participants in particular delayed the start of grasping with respect to start of reaching, with the delay time increasing with object distance. For larger object distances and smaller object sizes, the bumpy and plateau hand opening patterns were used more often. We tentatively concluded that the alternative hand opening patterns extended the hand opening phase, to arrive at the appropriate hand aperture at the appropriate time to close the hand for grasping the object. Variations in hand opening patterns deserve attention because this might lead to new insights into the coordination of reaching and grasping. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Dry Friction: Motions - Map and Characterization

    International Nuclear Information System (INIS)

    Motchongom-Tingue, M.; Kenfack-Jiotsa, A.; Tsobgni-Fozap, D.C.; Kofane, T.C.

    2009-12-01

    We consider a simple model of a spring-mass block placed on a plate rolling at constant velocity v. The map of the dynamics is presented in the (v, r) space, where r accounts for the possible variation of the periodic shape profile of the rolling carpet. In order to characterize each type of motion, we found that evaluating the area of the phase-space trajectories is more relevant than attempting, on one hand, to solve analytically the asymptotic behavior or, on the other hand, to obtain an equivalent of the entropy and the free energy. (author)
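
    As a much-simplified companion to this record, the sketch below integrates a spring-mass block resting on a belt moving at constant speed v with a smoothed, velocity-weakening friction law; the resulting phase-space loop is the kind of trajectory whose enclosed area the authors evaluate. It is not the paper's periodic-profile model, and all parameters are arbitrary.

```python
# Stick-slip oscillator: spring-mass block dragged by a moving belt.
import numpy as np
from scipy.integrate import solve_ivp

def friction(v_rel, mu_s=0.5, mu_k=0.3, eps=1e-3, v_c=0.1):
    """Smoothed, velocity-weakening friction coefficient times the sign of slip."""
    mu = mu_k + (mu_s - mu_k) * np.exp(-abs(v_rel) / v_c)
    return mu * np.tanh(v_rel / eps)

def block_on_belt(t, state, m=1.0, k=1.0, g=9.81, v_belt=0.2):
    """State is (position, velocity) of the block; the belt moves at v_belt."""
    x, x_dot = state
    v_rel = v_belt - x_dot                 # belt velocity relative to the block
    return [x_dot, (-k * x + friction(v_rel) * m * g) / m]

sol = solve_ivp(block_on_belt, (0.0, 100.0), [0.0, 0.0], max_step=0.01)
# sol.y[0] (position) vs. sol.y[1] (velocity) traces the phase-space loop whose
# enclosed area is used above to characterize each motion regime.
```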

  10. Unimanual SNARC Effect: Hand Matters.

    Science.gov (United States)

    Riello, Marianna; Rusconi, Elena

    2011-01-01

    A structural representation of the hand embedding information about the identity and relative position of fingers is necessary to counting routines. It may also support associations between numbers and allocentric spatial codes that predictably interact with other known numerical spatial representations, such as the mental number line (MNL). In this study, 48 Western participants whose typical counting routine proceeded from thumb-to-little on both hands performed magnitude and parity binary judgments. Response keys were pressed either with the right index and middle fingers or with the left index and middle fingers in separate blocks. 24 participants responded with either hands in prone posture (i.e., palm down) and 24 participants responded with either hands in supine (i.e., palm up) posture. When hands were in prone posture, the counting direction of the left hand conflicted with the direction of the left-right MNL, whereas the counting direction of the right hand was consistent with it. When hands were in supine posture, the opposite was true. If systematic associations existed between relative number magnitude and an allocentric spatial representation of the finger series within each hand, as predicted on the basis of counting habits, interactions would be expected between hand posture and a unimanual version of the spatial-numerical association of response codes (SNARC) effect. Data revealed that with hands in prone posture a unimanual SNARC effect was present for the right hand, and with hands in supine posture a unimanual SNARC effect was present for the left hand. We propose that a posture-invariant body structural representation of the finger series provides a relevant frame of reference, a within-hand directional vector, that is associated to simple number processing. Such frame of reference can significantly interact with stimulus-response correspondence effects, like the SNARC, that have been typically attributed to the mapping of numbers on a left

  11. Unimanual SNARC Effect: Hand Matters

    Directory of Open Access Journals (Sweden)

    Marianna eRiello

    2011-12-01

    Full Text Available A structural representation of the hand embedding information about the identity and relative position of fingers is necessary to counting routines. It may also support associations between numbers and allocentric spatial codes that predictably interact with other known numerical spatial representations, such as the mental number line. In this study, 48 Western participants whose typical counting routine proceeded from thumb-to-little on both hands performed magnitude and parity binary judgments. Response keys were pressed either with the right index and middle fingers or with the left index and middle fingers in separate blocks. 24 participants responded with either hands in prone posture (i.e. palm down and 24 participants responded with either hands in supine (i.e. palm up posture. When hands were in prone posture, the counting direction of the left hand conflicted with the direction of the left-right mental number line, whereas the counting direction of the right hand was consistent with it. When hands were in supine posture, the opposite was true. If systematic associations existed between relative number magnitude and an allocentric spatial representation of the finger series within each hand, as predicted on the basis of counting habits, interactions would be expected between hand posture and a unimanual version of the Spatial-Numerical Association of Response Codes (SNARC effect. Data revealed that with hands in prone posture a unimanual SNARC effect was present for the right hand, and with hands in supine posture a unimanual SNARC effect was present for the left hand. We propose that a posture-invariant body structural representation of the finger series provides a relevant frame of reference, a within-hand directional vector, that is associated to simple number processing. Such frame of reference can significantly interact with stimulus-response correspondence effects that have been attributed to the mapping of numbers on a mental

  12. The right inhibition? Callosal correlates of hand performance in healthy children and adolescents callosal correlates of hand performance.

    Science.gov (United States)

    Kurth, Florian; Mayer, Emeran A; Toga, Arthur W; Thompson, Paul M; Luders, Eileen

    2013-09-01

    Numerous studies suggest that interhemispheric inhibition-relayed via the corpus callosum-plays an important role in unilateral hand motions. Interestingly, transcallosal inhibition appears to be indicative of a strong laterality effect, where generally the dominant hemisphere exerts inhibition on the nondominant one. These effects have been largely identified through functional studies in adult populations, but links between motor performance and callosal structure (especially during sensitive periods of neurodevelopment) remain largely unknown. We therefore investigated correlations between Purdue Pegboard performance (a test of motor function) and local callosal thickness in 170 right-handed children and adolescents (mean age: 11.5 ± 3.4 years; range, 6-17 years). Better task performance with the right (dominant) hand was associated with greater callosal thickness in isthmus and posterior midbody. Task performance using both hands yielded smaller and less significant correlations in the same regions, while task performance using the left (nondominant) hand showed no significant correlations with callosal thickness. There were no significant interactions with age and sex. These links between motor performance and callosal structure may constitute the neural correlate of interhemispheric inhibition, which is thought to be necessary for fast and complex unilateral motions and to be biased towards the dominant hand. Copyright © 2012 Wiley Periodicals, Inc., a Wiley company.

  13. Harmonization versus Mutual Recognition

    DEFF Research Database (Denmark)

    Jørgensen, Jan Guldager; Schröder, Philipp

    The present paper examines trade liberalization driven by the coordination of product standards. For oligopolistic firms situated in separate markets that are initially sheltered by national standards, mutual recognition of standards implies entry and reduced profits at home paired with the oppor...... countries and three firms, where firms first lobby for the policy coordination regime (harmonization versus mutual recognition), and subsequently, in case of harmonization, the global standard is auctioned among the firms. We discuss welfare effects and conclude with policy implications. In particular......, harmonized standards may fail to harvest the full pro-competitive effects from trade liberalization compared to mutual recognition; moreover, the issue is most pronounced in markets featuring price competition.

  14. CASE Recognition Awards.

    Science.gov (United States)

    Currents, 1985

    1985-01-01

    A total of 294 schools, colleges, and universities received prizes in this year's CASE Recognition program. Awards were given in: public relations programs, student recruitment, marketing, program publications, news writing, fund raising, radio programming, school periodicals, etc. (MLW)

  15. Forensic speaker recognition

    NARCIS (Netherlands)

    Meuwly, Didier

    2013-01-01

    The aim of forensic speaker recognition is to establish links between individuals and criminal activities, through audio speech recordings. This field is multidisciplinary, combining predominantly phonetics, linguistics, speech signal processing, and forensic statistics. On these bases, expert-based

  16. Unreal Interactive Puppet Game Development Using Leap Motion

    Science.gov (United States)

    Huang, An-Pin; Huang, Fay; Jhu, Jing-Siang

    2018-04-01

    This paper proposes a novel puppet-play method utilizing recent technology. An interactive puppet game has been developed based on the theme of a famous Chinese classical novel. This project was implemented using Unreal Engine, which is a leading suite of integrated tools for developers to design and build games. The Leap Motion Controller, on the other hand, is a sensor device for recognizing hand movements and gestures, commonly used in systems that require close-range finger-based user interaction. In order to manipulate the puppets' movements, the developed program employs the Leap Motion SDK, which provides a friendly way to add motion-controlled 3D hands to an Unreal game. The novelty of our project is to replace the 3D model of rigged hands with two 3D humanoid rigged characters. The challenges of this task are twofold. First, the skeleton structures of a human hand and of a humanoid character (i.e., a puppet) are totally different. Making the puppets follow the user's hand poses while ensuring reasonable puppet movements has not been discussed in the literature or in developer forums. Second, there are only a limited number of built-in recognizable hand gestures, so more recognizable hand gestures need to be created for the interactive game. This paper reports the proposed solutions to these challenges.

  17. The Recognition Of Fatigue

    DEFF Research Database (Denmark)

    Elsass, Peter; Jensen, Bodil; Mørup, Rikke

    2007-01-01

    Elsass P., Jensen B., Morup R., Thogersen M.H. (2007). The Recognition Of Fatigue: A qualitative study of life-stories from rehabilitation clients. International Journal of Psychosocial Rehabilitation. 11 (2), 75-87

  18. Evaluating music emotion recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do...... not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER system, and conclude with recommendations.

  19. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness with which the human visual system recovers motion estimates in almost any visual situation is enviable; it performs enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature's templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: the Multichannel Gradient Model (McGM). This novel customizable architecture for neuromorphic robust optical flow can be constructed with an FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes resource usage and performance data, and a comparison with existing systems. This hardware has many application fields, such as object recognition, navigation, or tracking in difficult environments, owing to its bioinspired nature and robustness.
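
    The McGM architecture summarized above is far richer than any off-the-shelf routine; purely as a point of comparison, the sketch below computes a dense gradient-based optical-flow baseline with OpenCV's Farneback implementation on two consecutive grayscale frames.

```python
# Dense optical-flow baseline for two consecutive grayscale frames.
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    """Return per-pixel (dx, dy) motion between two grayscale frames.
    Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def mean_speed(flow):
    """Average motion magnitude in pixels per frame, e.g. for coarse motion detection."""
    return float(np.linalg.norm(flow, axis=2).mean())
```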

  20. Hand Rehabilitation Learning System With an Exoskeleton Robotic Glove.

    Science.gov (United States)

    Ma, Zhou; Ben-Tzvi, Pinhas; Danoff, Jerome

    2016-12-01

    This paper presents a hand rehabilitation learning system, the SAFE Glove, a device that can be utilized to enhance the rehabilitation of subjects with disabilities. This system is able to learn fingertip motion and force for grasping different objects and then record and analyze the common movements of hand function including grip and release patterns. The glove is then able to reproduce these movement patterns in playback fashion to assist a weakened hand to accomplish these movements, or to modulate the assistive level based on the user's or therapist's intent for the purpose of hand rehabilitation therapy. Preliminary data have been collected from healthy hands. To demonstrate the glove's ability to manipulate the hand, the glove has been fitted on a wooden hand and the grasping of various objects was performed. To further prove that hands can be safely driven by this haptic mechanism, force sensor readings placed between each finger and the mechanism are plotted. These experimental results demonstrate the potential of the proposed system in rehabilitation therapy.
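
    A schematic sketch of the record-and-playback idea described above: store joint-angle trajectories during a demonstrated grasp, then replay them scaled by an assistance level. The glove interface callables named here are hypothetical placeholders, not the SAFE Glove API.

```python
# Record a demonstrated grasp trajectory and replay it with adjustable assistance.
import time

def record_grasp(read_joint_angles, duration_s=5.0, rate_hz=50):
    """read_joint_angles: hypothetical callable returning the glove's current joint-angle list."""
    trajectory = []
    for _ in range(int(duration_s * rate_hz)):
        trajectory.append(read_joint_angles())
        time.sleep(1.0 / rate_hz)
    return trajectory

def play_back(trajectory, command_joint_angles, assist_level=1.0, rate_hz=50):
    """Replay a recorded trajectory, scaling its amplitude by assist_level (0..1);
    command_joint_angles: hypothetical callable driving the exoskeleton joints."""
    start = trajectory[0]
    for frame in trajectory:
        target = [s + assist_level * (f - s) for s, f in zip(start, frame)]
        command_joint_angles(target)
        time.sleep(1.0 / rate_hz)
```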

  1. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradient (HOG) features from a hand image and then using them to train an SVM model, which is used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. On each input image, first obtained using a depth sensor, we apply our method, based on hand anatomy, to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
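
    A minimal sketch of the HOG-plus-SVM pipeline described in this record, assuming hand detection and depth-based segmentation have already produced grayscale hand crops; HOG parameters and the SVM kernel are illustrative choices, not the authors'.

```python
# HOG descriptors of segmented hand crops fed to an SVM alphabet classifier.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_features(hand_crop, size=(64, 64)):
    """hand_crop: 2-D grayscale hand image; returns a flat HOG descriptor."""
    img = resize(hand_crop, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_arsl_classifier(hand_crops, letters):
    """Train an SVM on HOG descriptors; letters holds the alphabet label per crop."""
    X = np.array([hog_features(c) for c in hand_crops])
    return SVC(kernel="rbf", C=10.0).fit(X, letters)

def recognize(clf, hand_crop):
    return clf.predict(hog_features(hand_crop).reshape(1, -1))[0]
```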

  2. Why recognition is rational

    Directory of Open Access Journals (Sweden)

    Clintin P. Davis-Stober

    2010-07-01

    Full Text Available The Recognition Heuristic (Gigerenzer and Goldstein, 1996; Goldstein and Gigerenzer, 2002) makes the counter-intuitive prediction that a decision maker utilizing less information may do as well as, or outperform, an idealized decision maker utilizing more information. We lay a theoretical foundation for the use of single-variable heuristics such as the Recognition Heuristic as an optimal decision strategy within a linear modeling framework. We identify conditions under which over-weighting a single predictor is a mini-max strategy among a class of a priori chosen weights based on decision heuristics with respect to a measure of statistical lack of fit we call "risk". These strategies, in turn, outperform standard multiple regression as long as the amount of data available is limited. We also show that, under related conditions, weighting only one variable and ignoring all others produces the same risk as ignoring the single variable and weighting all others. This approach has the advantage of generalizing beyond the original environment of the Recognition Heuristic to situations with more than two choice options, binary or continuous representations of recognition, and to other single variable heuristics. We analyze the structure of data used in some prior recognition tasks and find that it matches the sufficient conditions for optimality in our results. Rather than being a poor or adequate substitute for a compensatory model, the Recognition Heuristic closely approximates an optimal strategy when a decision maker has finite data about the world.

  3. Robotically enhanced rubber hand illusion.

    Science.gov (United States)

    Arata, Jumpei; Hattori, Masashi; Ichikawa, Shohei; Sakaguchi, Masamichi

    2014-01-01

    The rubber hand illusion is a well-known multisensory illusion. In brief, watching a rubber hand being stroked by a paintbrush while one's own unseen hand is synchronously stroked causes the rubber hand to be attributed to one's own body and to "feel like it's my hand." The rubber hand illusion is thought to be triggered by the synchronized tactile stimulation of both the subject's hand and the fake hand. To extend the conventional rubber hand illusion, we introduce robotic technology in the form of a master-slave telemanipulator. The developed one degree-of-freedom master-slave system consists of an exoskeleton master equipped with an optical encoder that is worn on the subject's index finger and a motor-actuated index finger on the rubber hand, which allows the subject to perform unilateral telemanipulation. The moving rubber hand illusion has been studied by several researchers in the past with mechanically connected rigs between the subject's body and the fake limb. The robotic instruments let us investigate the moving rubber hand illusion with less constraints, thus behaving closer to the classic rubber hand illusion. In addition, the temporal delay between the body and the fake limb can be precisely manipulated. The experimental results revealed that the robotic instruments significantly enhance the rubber hand illusion. The time delay is significantly correlated with the effect of the multisensory illusion, and the effect significantly decreased at time delays over 100 ms. These findings can potentially contribute to the investigations of neural mechanisms in the field of neuroscience and of master-slave systems in the field of robotics.

  4. The thermodynamic cycle of an entropy-driven stepper motor walking hand-over-hand

    International Nuclear Information System (INIS)

    Zabicki, Michal; Ebeling, Werner; Gudowska-Nowak, Ewa

    2010-01-01

    Graphical abstract: We develop a new model of an entropy-driven stepper motor walking hand-over-hand, coupled to the energy reservoir of ATP. - Abstract: We develop a model of a kinesin motor based on an entropy-driven spring between the two heads of the stepper. The stepper is coupled to the energy depot, which is a reservoir of ATP. A Langevin equation for the motion of the two legs in a ratchet potential is analyzed by performing numerical simulations. It is documented that the model motor is able to work against a load force with an efficiency of about 10-30%. At a critical load force the motor stops operating.
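
    The two-head, energy-depot model above is considerably more elaborate; as a hedged illustration of its basic ingredient, the sketch below integrates an overdamped Langevin particle in an asymmetric (ratchet) potential with an opposing load via the Euler-Maruyama scheme. Without the nonequilibrium ATP coupling modeled in the paper, no net transport against the load is expected; all parameter values are arbitrary.

```python
# Overdamped Langevin dynamics in a ratchet potential with a constant load force.
import numpy as np

def simulate_ratchet(v0=1.0, load=0.1, diffusion=0.2, dt=1e-3, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0

    def force(x):
        # -dV/dx for the asymmetric potential V(x) = v0*(sin(2*pi*x) + 0.25*sin(4*pi*x))
        return -v0 * (2.0 * np.pi * np.cos(2.0 * np.pi * x)
                      + np.pi * np.cos(4.0 * np.pi * x))

    for _ in range(steps):
        noise = np.sqrt(2.0 * diffusion * dt) * rng.standard_normal()
        x += (force(x) - load) * dt + noise          # Euler-Maruyama step
    return x / (steps * dt)                          # mean drift velocity

if __name__ == "__main__":
    # With only thermal noise and a static ratchet, drift follows the load;
    # directed motion against the load requires the ATP-depot driving of the paper.
    print("mean drift velocity:", simulate_ratchet())
```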

  5. Automatic Video-based Analysis of Human Motion

    DEFF Research Database (Denmark)

    Fihl, Preben

    The human motion contains valuable information in many situations and people frequently perform an unconscious analysis of the motion of other people to understand their actions, intentions, and state of mind. An automatic analysis of human motion will facilitate many applications and thus has...... received great interest from both industry and research communities. The focus of this thesis is on video-based analysis of human motion and the thesis presents work within three overall topics, namely foreground segmentation, action recognition, and human pose estimation. Foreground segmentation is often...... the first important step in the analysis of human motion. By separating foreground from background the subsequent analysis can be focused and efficient. This thesis presents a robust background subtraction method that can be initialized with foreground objects in the scene and is capable of handling...

  6. Hand-related physical function in rheumatic hand conditions

    DEFF Research Database (Denmark)

    Klokker, Louise; Terwee, Caroline; Wæhrens, Eva Elisabet Ejlersen

    2016-01-01

    INTRODUCTION: There is no consensus about what constitutes the most appropriate patient-reported outcome measurement (PROM) instrument for measuring physical function in patients with rheumatic hand conditions. Existing instruments lack psychometric testing and vary in feasibility...... and their psychometric qualities. We aim to develop a PROM instrument to assess hand-related physical function in rheumatic hand conditions. METHODS AND ANALYSIS: We will perform a systematic search to identify existing PROMs to rheumatic hand conditions, and select items relevant for hand-related physical function...... as well as those items from the Patient Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank that are relevant to patients with rheumatic hand conditions. Selection will be based on consensus among reviewers. Content validity of selected items will be established...

  7. Hand-related physical function in rheumatic hand conditions

    DEFF Research Database (Denmark)

    Klokker, Louise; Terwee, Caroline B; Wæhrens, Eva Ejlersen

    2016-01-01

    INTRODUCTION: There is no consensus about what constitutes the most appropriate patient-reported outcome measurement (PROM) instrument for measuring physical function in patients with rheumatic hand conditions. Existing instruments lack psychometric testing and vary in feasibility...... and their psychometric qualities. We aim to develop a PROM instrument to assess hand-related physical function in rheumatic hand conditions. METHODS AND ANALYSIS: We will perform a systematic search to identify existing PROMs to rheumatic hand conditions, and select items relevant for hand-related physical function as well as those items from the Patient Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank that are relevant to patients with rheumatic hand conditions. Selection will be based on consensus among reviewers. Content validity of selected items will be established......

  8. Fusion of optical flow based motion pattern analysis and silhouette classification for person tracking and detection

    NARCIS (Netherlands)

    Tangelder, J.W.H.; Lebert, E.; Burghouts, G.J.; Zon, K. van; Den Uyl, M.J.

    2014-01-01

    This paper presents a novel approach to detect persons in video by combining optical flow based motion analysis and silhouette based recognition. A new fast optical flow computation method is described, and its application in a motion based analysis framework unifying human tracking and detection is

  9. Recognition and Synthesis of Human Movements by Parametric HMMs

    DEFF Research Database (Denmark)

    Herzog, Dennis; Krüger, Volker

    2009-01-01

    The representation of human movements for recognition and synthesis is important in many application fields such as: surveillance, human-computer interaction, motion capture, and humanoid robots. Hidden Markov models (HMMs) are a common statistical framework in this context, since...... on the recognition and synthesis of human arm movements. Furthermore, we will show in various experiments the use of PHMMs for the control of a humanoid robot by synthesizing movements for relocating objects at arbitrary positions. In vision-based interaction experiments, PHMM are used for the recognition...... of pointing movements, where the recognized parameterization conveys to a robot the important information which object to relocate and where to put it. Finally, we evaluate the accuracy of recognition and synthesis for pointing and grasping arm movements and discuss that the precision of the synthesis...

  11. HENRY'S "HAND OF GOD"

    Directory of Open Access Journals (Sweden)

    Željko Kaluđerović

    2014-04-01

    Full Text Available In this paper the author discusses the views and statements of the French football player Thierry Henry he gave after his illegal play during the playoff match between France and the Republic of Ireland to claim one of the final spots in the World Cup 2010 in South Africa. First, by controlling the ball with his hand before passing it on for the goal Henry has shown disregard for the constitutive rules of football. Then, by stating that he is "not a referee" he demonstrated that for some players rules are not inherent to football and that they can be relativized, given that for them winning is the goal of the highest ontological status. Furthermore, he has rejected the rules of sportsmanship, thus expressing his opinion that the opponents are just obstacles which have to be removed in order to achieve your goals. Henry's action has disrupted major moral values, such as justice, honesty, responsibility and beneficence. The rules of fair play have totally been ignored both in Henry's action and in the Football Association of France's unwillingness to comment on whether a replay should take place. They have ignored one of the basic principles stated in the "Declaration of the International Fair Play Committee", according to which, fair play is much more than playing to the rules of the game; it's about the attitude of the sportsperson. It's about respecting your opponent and preserving his or her physical and psychological integrity. Finally, the author believes that the rules, moral values and fair play in football are required for this game to become actually possible to play.

  12. Hip strength and range of motion

    DEFF Research Database (Denmark)

    Mosler, Andrea B.; Crossley, Kay M.; Thorborg, Kristian

    2017-01-01

    Objectives To determine the normal profiles for hip strength and range of motion (ROM) in a professional football league in Qatar, and examine the effect of leg dominance, age, past history of injury, and ethnicity on these profiles. Design Cross-sectional cohort study. Methods Participants...... values are documented for hip strength and range of motion that can be used as reference profiles in the clinical assessment, screening, and management of professional football players. Leg dominance, recent past injury history and ethnicity do not need to be accounted for when using these profiles...... included 394 asymptomatic, male professional football players, aged 18–40 years. Strength was measured using a hand held dynamometer with an eccentric test in side-lying for hip adduction and abduction, and the squeeze test in supine with 45° hip flexion. Range of motion measures included: hip internal...

  13. Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field

    Directory of Open Access Journals (Sweden)

    Yingmin Li

    2010-01-01

    Full Text Available The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement in the similarity analysis of simulated multi-point ground motions and actual seismic array records. The analysis results show that the DTW distance method not only quantitatively reflects the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition of actual multi-point ground motion fields.
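
    Since the DTW distance is the core quantity of the method, a minimal NumPy implementation with the standard symmetric step pattern is sketched below; the example records are synthetic and purely illustrative.

    ```python
    # Minimal dynamic time warping (DTW) distance between two 1-D ground-motion
    # records, using the standard step pattern and an absolute-difference cost.
    import numpy as np

    def dtw_distance(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    # Example: similar records give a small DTW distance even when time-shifted.
    t = np.linspace(0, 10, 200)
    print(dtw_distance(np.sin(t), np.sin(t - 0.5)))
    ```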

  14. Robust 3D Face Recognition in the Presence of Realistic Occlusions

    NARCIS (Netherlands)

    Alyuz, Nese; Gökberk, B.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Akarun, Lale

    2012-01-01

    Facial occlusions pose significant problems for automatic face recognition systems. In this work, we propose a novel occlusion-resistant three-dimensional (3D) facial identification system. We show that, under extreme occlusions due to hair, hands, and eyeglasses, typical 3D face recognition systems

  15. Motion and relativity

    CERN Document Server

    Infeld, Leopold

    1960-01-01

    Motion and Relativity focuses on the methodologies, solutions, and approaches involved in the study of motion and relativity, including the general relativity theory, gravitation, and approximation. The publication first offers information on notation and gravitational interaction and the general theory of motion. Discussions focus on the notation of the general relativity theory, field values on the world-lines, general statement of the physical problem, Newton's theory of gravitation, and forms for the equation of motion of the second kind. The text then takes a look at the approximation meth

  16. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  17. Mental rotation of anthropoid hands: a chronometric study

    Directory of Open Access Journals (Sweden)

    L.G. Gawryszewski

    2007-03-01

    Full Text Available It has been shown that mental rotation of objects and human body parts is processed differently in the human brain. But what about body parts belonging to other primates? Does our brain process this information like any other object or does it instead maximize the structural similarities with our homologous body parts? We tried to answer this question by measuring the manual reaction time (MRT) of human participants discriminating the handedness of drawings representing the hands of four anthropoid primates (orangutan, chimpanzee, gorilla, and human). Twenty-four right-handed volunteers (13 males and 11 females) were instructed to judge the handedness of a hand drawing in palm view by pressing a left/right key. The orientation of hand drawings varied from 0° (fingers upwards) to 90° lateral (fingers pointing away from the midline), 180° (fingers downwards) and 90° medial (fingers towards the midline). The results showed an effect of rotation angle (F(3, 69) = 19.57, P < 0.001), but not of hand identity, on MRTs. Moreover, for all hand drawings, a medial rotation elicited shorter MRTs than a lateral rotation (960 and 1169 ms, respectively, P < 0.05). This result has been previously observed for drawings of the human hand and related to biomechanical constraints of movement performance. Our findings indicate that anthropoid hands are essentially equivalent stimuli for handedness recognition. Since the task involves mentally simulating the posture and rotation of the hands, we wondered if "mirror neurons" could be involved in establishing the motor equivalence between the stimuli and the participants' own hands.

  18. A Specific Role for Efferent Information in Self-Recognition

    Science.gov (United States)

    Tsakiris, M.; Haggard, P.; Franck, N.; Mainy, N.; Sirigu, A.

    2005-01-01

    We investigated the specific contribution of efferent information in a self-recognition task. Subjects experienced a passive extension of the right index finger, either as an effect of moving their left hand via a lever ('self-generated action'), or imposed externally by the experimenter ('externally-generated action'). The visual feedback was…

  19. Investigations of Hemispheric Specialization of Self-Voice Recognition

    Science.gov (United States)

    Rosa, Christine; Lassonde, Maryse; Pinard, Claudine; Keenan, Julian Paul; Belin, Pascal

    2008-01-01

    Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left hand. In Experiment 2, participants were presented with auditory morphs between the…

  20. Relating the Content and Confidence of Recognition Judgments

    Science.gov (United States)

    Selmeczy, Diana; Dobbins, Ian G.

    2014-01-01

    The Remember/Know procedure, developed by Tulving (1985) to capture the distinction between the conscious correlates of episodic and semantic retrieval, has spawned considerable research and debate. However, only a handful of reports have examined the recognition content beyond this dichotomous simplification. To address this, we collected…

  1. Grip-pattern recognition: Applied to a smart gun

    NARCIS (Netherlands)

    Shang, X.

    2008-01-01

    In our work, the verification performance of a biometric recognition system based on grip patterns, as part of a smart gun for use by police officers, has been investigated. The biometric features are extracted from a two-dimensional pattern of the pressure exerted on the grip of a gun by the hand
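
    As an illustrative sketch only (not the thesis' verification pipeline), the grip-pressure pattern can be treated as a small image, flattened, and fed to an SVM; the array names, kernel choice and acceptance threshold below are assumptions.

    ```python
    # Treat each 2-D grip-pressure pattern as a small image, flatten it, and
    # train an SVM over enrolled users; verification accepts a claimed identity
    # only if its posterior probability is high enough. Illustrative only.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def train_grip_classifier(pressure_maps, user_ids):
        # pressure_maps: (N, H, W) array of pressure images; user_ids: (N,) labels
        X = pressure_maps.reshape(len(pressure_maps), -1)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X, user_ids)
        return clf

    def verify(clf, pressure_map, claimed_id, threshold=0.8):
        p = clf.predict_proba(pressure_map.reshape(1, -1))[0]
        idx = list(clf.classes_).index(claimed_id)
        return p[idx] >= threshold
    ```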

  2. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality

    OpenAIRE

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-01-01

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and hea...

  3. Page Recognition: Quantum Leap In Recognition Technology

    Science.gov (United States)

    Miller, Larry

    1989-07-01

    No milestone has proven as elusive as the always-approaching "year of the LAN," but the "year of the scanner" might claim the silver medal. Desktop scanners have been around almost as long as personal computers. And everyone thinks they are used for obvious desktop-publishing and business tasks like scanning business documents, magazine articles and other pages, and translating those words into files your computer understands. But, until now, the reality fell far short of the promise. Because it's true that scanners deliver an accurate image of the page to your computer, but the software to recognize this text has been woefully disappointing. Old optical-character recognition (OCR) software recognized such a limited range of pages as to be virtually useless to real users. (For example, one OCR vendor specified 12-point Courier font from an IBM Selectric typewriter: the same font in 10-point, or from a Diablo printer, was unrecognizable!) Computer dealers have told me the chasm between OCR expectations and reality is so broad and deep that nine out of ten prospects leave their stores in disgust when they learn the limitations. And this is a very important, very unfortunate gap. Because the promise of recognition -- what people want it to do -- carries with it tremendous improvements in our productivity and ability to get tons of written documents into our computers where we can do real work with it. The good news is that a revolutionary new development effort has led to the new technology of "page recognition," which actually does deliver the promise we've always wanted from OCR. I'm sure every reader appreciates the breakthrough represented by the laser printer and page-makeup software, a combination so powerful it created new reasons for buying a computer. A similar breakthrough is happening right now in page recognition: the Macintosh (and, I must admit, other personal computers) equipped with a moderately priced scanner and OmniPage software (from Caere

  4. Motion control, motion sickness, and the postural dynamics of mobile devices.

    Science.gov (United States)

    Stoffregen, Thomas A; Chen, Yi-Chou; Koslucher, Frank C

    2014-04-01

    Drivers are less likely than passengers to experience motion sickness, an effect that is important for any theoretical account of motion sickness etiology. We asked whether different types of control would affect the incidence of motion sickness, and whether any such effects would be related to participants' control of their own bodies. Participants played a video game on a tablet computer. In the Touch condition, the device was stationary and participants controlled the game exclusively through fingertip inputs via the device's touch screen. In the Tilt condition, participants held the device in their hands and moved the device to control some game functions. Results revealed that the incidence of motion sickness was greater in the Touch condition than in the Tilt condition. During game play, movement of the head and torso differed as a function of the type of game control. Before the onset of subjective symptoms of motion sickness, movement of the head and torso differed between participants who later reported motion sickness and those that did not. We discuss implications of these results for theories of motion sickness etiology.

  5. A system of automatic speaker recognition on a minicomputer

    International Nuclear Information System (INIS)

    El Chafei, Cherif

    1978-01-01

    This study describes a system of automatic speaker recognition using the pitch of the voice. The pre-processing consists of extracting the speakers' discriminating characteristics from the pitch. The recognition programme first performs a preselection and then computes the distance between the characteristics of the speaker to be recognized and those of the speakers already recorded. A recognition experiment was carried out with 15 speakers, comprising 566 tests spread over an intermittent period of four months. The discriminating characteristics used offer several interesting qualities. The algorithms for measuring the characteristics, on the one hand, and for classifying the speakers, on the other, are simple. The results obtained in real time on a minicomputer are satisfactory. They could probably be improved further by considering other discriminating characteristics of the speakers, but this was unfortunately not within our possibilities. (author) [fr
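
    A toy sketch of the described pipeline, under the assumption of a simple autocorrelation pitch tracker and Euclidean nearest-neighbour matching against enrolled templates (the author's original features and distance are not reproduced here):

    ```python
    # Summarize each utterance by simple pitch statistics (frame-wise
    # autocorrelation) and identify a speaker by the smallest distance to
    # enrolled feature templates. All names and parameters are illustrative.
    import numpy as np

    def pitch_track(x, sr, frame=1024, fmin=60, fmax=400):
        lags = np.arange(int(sr / fmax), int(sr / fmin))
        f0 = []
        for start in range(0, len(x) - frame, frame):
            w = x[start:start + frame] - np.mean(x[start:start + frame])
            ac = np.correlate(w, w, mode="full")[frame - 1:]
            if ac[0] <= 0:
                continue                      # skip silent frames
            best = lags[np.argmax(ac[lags])]  # lag of strongest periodicity
            f0.append(sr / best)
        return np.array(f0)

    def pitch_features(x, sr):
        f0 = pitch_track(x, sr)
        return np.array([f0.mean(), f0.std(), np.median(f0)])

    def identify(templates, x, sr):
        # templates: dict speaker_name -> enrolled feature vector
        feat = pitch_features(x, sr)
        return min(templates, key=lambda s: np.linalg.norm(templates[s] - feat))
    ```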

  6. fMRI-compatible rehabilitation hand device

    Directory of Open Access Journals (Sweden)

    Tzika Aria

    2006-10-01

    Full Text Available Abstract Background Functional magnetic resonance imaging (fMRI) has been widely used in studying human brain functions and neurorehabilitation. In order to develop complex and well-controlled fMRI paradigms, interfaces that can precisely control and measure output force and kinematics of the movements in human subjects are needed. Optimized state-of-the-art fMRI methods, combined with magnetic resonance (MR)-compatible robotic devices for rehabilitation, can assist therapists to quantify, monitor, and improve physical rehabilitation. To achieve this goal, robotic or mechatronic devices with actuators and sensors need to be introduced into an MR environment. Common standard mechanical parts cannot be used in an MR environment, and MR compatibility has been a tough hurdle for device developers. Methods This paper presents the design, fabrication and preliminary testing of a novel, one-degree-of-freedom, MR-compatible, computer-controlled, variable-resistance hand device that may be used in brain MR imaging during hand-grip rehabilitation. We named the device MR_CHIROD (Magnetic Resonance Compatible Smart Hand Interfaced Rehabilitation Device). A novel feature of the device is the use of Electro-Rheological Fluids (ERFs) to achieve tunable and controllable resistive force generation. ERFs are fluids that experience dramatic changes in rheological properties, such as viscosity or yield stress, in the presence of an electric field. The device consists of four major subsystems: (a) an ERF-based resistive element; (b) a gearbox; (c) two handles; and (d) two sensors, one optical encoder and one force sensor, to measure the patient-induced motion and force. The smart hand device is designed to resist up to 50% of the maximum level of gripping force of a human hand and be controlled in real time. Results Laboratory tests of the device indicate that it was able to meet its design objective to resist up to approximately 50% of the maximum handgrip force. The detailed

  7. Markerless Kinect-Based Hand Tracking for Robot Teleoperation

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2012-07-01

    Full Text Available This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. Using this tracking algorithm, the 3D positions of the index finger and thumb can be estimated by processing depth images from the Kinect. The hand pose is used as a model to specify the pose of the remote robot's end-effector in real time. This method provides a way to send a whole task to a remote robot, instead of the limited motion commands of gesture-based approaches, and it has been tested in pick-and-place tasks.
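
    A heavily simplified stand-in for such a tracker: segment the closest blob in a depth frame (assumed to be the hand) and return its centroid. Fingertip localization and 3D back-projection, which the paper performs, are omitted, and the depth band is an assumption.

    ```python
    # Segment the nearest blob in a Kinect depth frame and return its centroid
    # in pixel coordinates; a crude placeholder for a fingertip tracker.
    import cv2
    import numpy as np

    def hand_centroid(depth_mm, band=120):
        valid = depth_mm[depth_mm > 0]
        if valid.size == 0:
            return None
        near = valid.min()
        mask = ((depth_mm > 0) & (depth_mm < near + band)).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)   # largest near-range blob
        M = cv2.moments(hand)
        if M["m00"] == 0:
            return None
        return (M["m10"] / M["m00"], M["m01"] / M["m00"])
    ```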

  8. Radiographic findings in wrists and hands of patients with leprosy

    International Nuclear Information System (INIS)

    Carreto, A.; Montero, F.; Garcia Frasquet, A.; Carpintero, P.

    1998-01-01

    Leprosy, like other neuropathic disorders, can involve the skeleton, affecting both bones and joints, especially those segments that have to withstand weight. The aim was to assess the osteoarticular involvement of the wrist and hand in patients with leprosy. The radiographic images of the wrists and hands of 58 patients with Hansen's disease were reviewed. The entire spectrum of specific and nonspecific bone lesions described in the literature is presented. Despite the fact that the upper limbs do not have to withstand the weight that the feet and ankles do, radiographic images show that gripping and other common motions can also produce lesions compatible with those of neuropathic arthropathy. (Author) 20 refs

  9. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones †

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-01-01

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy. PMID:27556461
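
    The sensor-based idea can be illustrated with a generic sketch: window the accelerometer and gyroscope streams, compute simple per-axis statistics, and train a supervised classifier. The feature set and classifier below are placeholders, not the paper's refined features.

    ```python
    # Window the accelerometer/gyroscope stream, compute per-axis statistics,
    # and train a supervised classifier to label each window (e.g. left hand,
    # right hand, hand-changing). Generic sketch with hypothetical data names.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(imu, win=100, step=50):
        # imu: (T, 6) array of [ax, ay, az, gx, gy, gz]
        feats = []
        for start in range(0, len(imu) - win + 1, step):
            w = imu[start:start + win]
            feats.append(np.concatenate([w.mean(0), w.std(0),
                                         w.min(0), w.max(0)]))
        return np.array(feats)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Hypothetical usage, given labelled recordings from both hands:
    #   X = np.vstack([window_features(r) for r in recordings])
    #   y = np.concatenate([np.full(len(window_features(r)), lab)
    #                       for r, lab in zip(recordings, labels)])
    #   clf.fit(X, y); clf.predict(window_features(new_recording))
    ```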

  10. Multi-fingered robotic hand

    Science.gov (United States)

    Ruoff, Carl F. (Inventor); Salisbury, Kenneth, Jr. (Inventor)

    1990-01-01

    A robotic hand is presented having a plurality of fingers, each having a plurality of joints pivotally connected one to the other. Actuators are connected at one end to an actuating and control mechanism mounted remotely from the hand and at the other end to the joints of the fingers for manipulating the fingers and passing externally of the robot manipulating arm in between the hand and the actuating and control mechanism. The fingers include pulleys to route the actuators within the fingers. Cable tension sensing structure mounted on a portion of the hand are disclosed, as is covering of the tip of each finger with a resilient and pliable friction enhancing surface.

  11. COMPARISON OF BACKGROUND SUBTRACTION, SOBEL, ADAPTIVE MOTION DETECTION, FRAME DIFFERENCES, AND ACCUMULATIVE DIFFERENCES IMAGES ON MOTION DETECTION

    Directory of Open Access Journals (Sweden)

    Dara Incam Ramadhan

    2018-02-01

    Full Text Available Nowadays, digital image processing is used not only to recognize motionless objects, but also to recognize moving objects in video. One application of moving object recognition in video is motion detection, which can be implemented on security cameras. Various motion detection methods have been developed, so in this research several of them are compared, namely Background Subtraction, Adaptive Motion Detection, Sobel, Frame Differences and Accumulative Differences Images (ADI). Each method has a different level of accuracy. The background subtraction method achieved 86.1% accuracy indoors and 88.3% outdoors. In the Sobel method, the result of motion detection depends on the lighting conditions of the monitored room: when the room is bright the accuracy of the system decreases, and when the room is dark the accuracy increases, reaching 80%. In the adaptive motion detection method, motion can be detected provided there are no easily moving objects in the camera's field of view. In the frame difference method, testing on RGB images using average computation with a threshold of 35 gives the best result. In the ADI method, the accuracy of motion detection reached 95.12%.
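
    Two of the compared techniques are easy to sketch with OpenCV: plain frame differencing (using the threshold of 35 reported above) and an off-the-shelf background-subtraction model. The MOG2 model here is a stand-in, not necessarily the variant evaluated in the paper.

    ```python
    # Two motion-mask generators: simple frame differencing and OpenCV's MOG2
    # background subtractor. Each returns a binary (or ternary) mask per frame.
    import cv2

    def frame_difference(prev_gray, gray, thresh=35):
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask

    bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

    def background_subtraction(frame_bgr):
        # 255 = foreground, 127 = shadow, 0 = background
        return bg_model.apply(frame_bgr)
    ```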

  12. The Importance of Spatiotemporal Information in Biological Motion Perception: White Noise Presented with a Step-like Motion Activates the Biological Motion Area.

    Science.gov (United States)

    Callan, Akiko; Callan, Daniel; Ando, Hiroshi

    2017-02-01

    Humans can easily recognize the motion of living creatures using only a handful of point-lights that describe the motion of the main joints (biological motion perception). This special ability to perceive the motion of animate objects signifies the importance of the spatiotemporal information in perceiving biological motion. The posterior STS (pSTS) and posterior middle temporal gyrus (pMTG) region have been established by many functional neuroimaging studies as a locus for biological motion perception. Because listening to a walking human also activates the pSTS/pMTG region, the region has been proposed to be supramodal in nature. In this study, we investigated whether the spatiotemporal information from simple auditory stimuli is sufficient to activate this biological motion area. We compared spatially moving white noise, having a running-like tempo that was consistent with biological motion, with stationary white noise. The moving-minus-stationary contrast showed significant differences in activation of the pSTS/pMTG region. Our results suggest that the spatiotemporal information of the auditory stimuli is sufficient to activate the biological motion area.

  13. The human hand as an inspiration for robot hand development

    CERN Document Server

    Santos, Veronica

    2014-01-01

    “The Human Hand as an Inspiration for Robot Hand Development” presents an edited collection of authoritative contributions in the area of robot hands. The results described in the volume are expected to lead to more robust, dependable, and inexpensive distributed systems such as those endowed with complex and advanced sensing, actuation, computation, and communication capabilities. The twenty-four chapters discuss the field of robotic grasping and manipulation viewed in light of the human hand’s capabilities and push the state of the art in robot hand design and control. Topics discussed include human hand biomechanics, neural control, sensory feedback and perception, and robotic grasp and manipulation. This book will be useful for researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.

  14. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary
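
    The exact PI-SVM and W-SVM calibrations are not reproduced here; as a simplified illustration of the EVT idea, one can fit a Weibull distribution to the tail of a known class's decision scores and treat the fitted CDF as a probability of inclusion, rejecting low-probability inputs as unknown. The tail size and acceptance threshold are assumptions.

    ```python
    # EVT-style score calibration sketch (not the PI-SVM or W-SVM themselves):
    # fit a Weibull to the highest scores of a known class and use the fitted
    # CDF as an approximate probability of class inclusion.
    import numpy as np
    from scipy.stats import weibull_min

    def fit_tail(scores, tail_size=20):
        tail = np.sort(scores)[-tail_size:]          # top positive-class scores
        shape, loc, scale = weibull_min.fit(tail)
        return shape, loc, scale

    def prob_inclusion(score, params):
        shape, loc, scale = params
        return weibull_min.cdf(score, shape, loc=loc, scale=scale)

    # Usage: params = fit_tail(svm_scores_of_known_class)
    #        accept = prob_inclusion(test_score, params) > 0.5   # else "unknown"
    ```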

  15. A synergy-driven approach to a myoelectric hand.

    Science.gov (United States)

    Godfrey, S B; Ajoudani, A; Catalano, M; Grioli, G; Bicchi, A

    2013-06-01

    In this paper, we present the Pisa/IIT SoftHand with myoelectric control as a synergy-driven approach for a prosthetic hand. Commercially available myoelectric hands are more expensive, heavier, and less robust than their body-powered counterparts; however, they can offer greater freedom of motion and a more aesthetically pleasing appearance. The Pisa/IIT SoftHand is built on the motor control principle of synergies through which the immense complexity of the hand is simplified into distinct motor patterns. As the SoftHand grasps, it follows a synergistic path with built-in flexibility to allow grasping of a wide variety of objects with a single motor. Here we test, as a proof-of-concept, 4 myoelectric controllers: a standard controller in which the EMG signal is used only as a position reference, an impedance controller that determines both position and stiffness references from the EMG input, a standard controller with vibrotactile force feedback, and finally a combined vibrotactile-impedance (VI) controller. Four healthy subjects tested the control algorithms by grasping various objects. All controllers were sufficient for basic grasping, however the impedance and vibrotactile controllers reduced the physical and cognitive load on the user, while the combined VI mode was the easiest to use of the four. While these results need to be validated with amputees, they suggest a low-cost, robust hand employing hardware-based synergies is a viable alternative to traditional myoelectric prostheses.
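
    The standard and impedance control modes can be sketched conceptually as mappings from an EMG envelope to references for the single motor: the envelope alone gives the position reference, and in the impedance variant it additionally scales a stiffness reference. Filter settings and gains below are illustrative, not the Pisa/IIT SoftHand's actual parameters.

    ```python
    # Conceptual EMG mappings: a rectified, low-pass-filtered envelope drives the
    # hand-closure reference (standard controller); the impedance variant also
    # derives a stiffness reference from the same envelope.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_envelope(emg, fs=1000, cutoff=3.0):
        b, a = butter(2, cutoff / (fs / 2), btype="low")
        return filtfilt(b, a, np.abs(emg))            # rectify, then low-pass

    def standard_controller(envelope, max_closure=1.0):
        e = np.clip(envelope / envelope.max(), 0, 1)
        return e * max_closure                        # position reference only

    def impedance_controller(envelope, k_min=0.2, k_max=1.0):
        e = np.clip(envelope / envelope.max(), 0, 1)
        return e, k_min + e * (k_max - k_min)         # position and stiffness
    ```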

  16. Breaking cover: neural responses to slow and fast camouflage-breaking motion.

    Science.gov (United States)

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei

    2015-08-22

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.

  17. Projectile Motion Hoop Challenge

    Science.gov (United States)

    Jordan, Connor; Dunn, Amy; Armstrong, Zachary; Adams, Wendy K.

    2018-01-01

    Projectile motion is a common phenomenon that is used in introductory physics courses to help students understand motion in two dimensions. Authors have shared a range of ideas for teaching this concept and the associated kinematics in "The Physics Teacher" ("TPT"); however, the "Hoop Challenge" is a new setup not…

  18. Temporal logic motion planning

    CSIR Research Space (South Africa)

    Seotsanyana, M

    2010-01-01

    Full Text Available In this paper, a critical review of temporal logic motion planning is presented. The review paper aims to address the following problems: (a) In a realistic situation, the motion planning problem is carried out in real time, in a dynamic, uncertain...

  19. Aristotle, Motion, and Rhetoric.

    Science.gov (United States)

    Sutton, Jane

    Aristotle rejects a world vision of changing reality as neither useful nor beneficial to human life, and instead he reaffirms both change and eternal reality, fuses motion and rest, and ends up with "well-behaved" changes. This concept of motion is foundational to his world view, and from it emerges his theory of knowledge, philosophy of…

  20. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

    Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches of human action recognition performed on public datasets (non-robot-based), including a novel approach of our own that is based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performances, especially given the adverse test conditions of the robot, relative to those of a fixed camera.
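
    Only the classification stage lends itself to a short sketch: bag-of-words histograms of trajectory descriptors classified with a histogram-intersection kernel via scikit-learn's precomputed-kernel interface. The dense-trajectory extraction and the SVM-HCRF sequential model are out of scope, and the C value is an assumption.

    ```python
    # Histogram-intersection-kernel SVM over bag-of-words histograms, using a
    # precomputed kernel matrix. Histograms are assumed L1-normalized.
    import numpy as np
    from sklearn.svm import SVC

    def intersection_kernel(A, B):
        # K[i, j] = sum_k min(A[i, k], B[j, k])
        return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

    def train(hist_train, labels):
        K = intersection_kernel(hist_train, hist_train)
        clf = SVC(kernel="precomputed", C=10.0)
        clf.fit(K, labels)
        return clf

    def predict(clf, hist_train, hist_test):
        K_test = intersection_kernel(hist_test, hist_train)  # rows: test samples
        return clf.predict(K_test)
    ```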