WorldWideScience

Sample records for hand shape recognition

  1. Shape-based hand recognition approach using the morphological pattern spectrum

    Science.gov (United States)

    Ramirez-Cortes, Juan Manuel; Gomez-Gil, Pilar; Sanchez-Perez, Gabriel; Prieto-Castro, Cesar

    2009-01-01

    We propose the use of the morphological pattern spectrum, or pecstrum, as the basis of a biometric shape-based hand recognition system. The system receives an image of the right hand of a subject in an unconstrained pose, captured with a commercial flatbed scanner. Owing to the pecstrum's invariance to translation and rotation, the system does not require pegs to fix the hand position, which simplifies the image acquisition process. This novel feature-extraction method is tested using a Euclidean distance classifier for identification and verification, obtaining 97% correct identification and an equal error rate (EER) of 0.0285 (2.85%) in verification mode. The obtained results indicate that the pattern spectrum represents a good feature-extraction alternative for low- and medium-level hand-shape-based biometric applications.
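
    The pattern spectrum measures how much of the silhouette's area is removed by morphological openings with structuring elements of increasing size, giving a compact, translation- and rotation-invariant shape signature. A minimal sketch is shown below; the library choice (scikit-image), the disk structuring element, and the normalization are illustrative assumptions rather than details taken from the record.

    ```python
    # Pecstrum sketch: area lost between successive openings of a binary silhouette.
    import numpy as np
    from skimage.morphology import opening, disk

    def pattern_spectrum(binary_hand, max_radius=30):
        """Return the normalized pattern spectrum of a binary hand silhouette."""
        total_area = float(binary_hand.sum())
        areas = [total_area]
        for r in range(1, max_radius + 1):
            # Area remaining after opening with a disk of radius r.
            areas.append(float(opening(binary_hand, disk(r)).sum()))
        # PS(r) = (area after opening r-1) - (area after opening r), normalized;
        # an isotropic structuring element makes this rotation invariant.
        return -np.diff(np.asarray(areas)) / total_area
    ```

    Two hands can then be compared with a Euclidean distance between their spectra, matching the classifier used in the identification and verification experiments above.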

  2. Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence.

    Science.gov (United States)

    Longcamp, Marieke; Boucard, Céline; Gilhodes, Jean-Claude; Anton, Jean-Luc; Roth, Muriel; Nazarian, Bruno; Velay, Jean-Luc

    2008-05-01

    Fast and accurate visual recognition of single characters is crucial for efficient reading. We explored the possible contribution of writing memory to character recognition processes. We evaluated the ability of adults to discriminate new characters from their mirror images after being taught how to produce the characters either by traditional pen-and-paper writing or with a computer keyboard. After training, we found stronger and longer lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. Functional magnetic resonance imaging recordings indicated that the response mode during learning is associated with distinct pathways during recognition of graphic shapes. Greater activity related to handwriting learning and normal letter identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular, the left Broca's area and bilateral inferior parietal lobules. Taken together, these results provide strong arguments in favor of the view that the specific movements memorized when learning how to write participate in the visual recognition of graphic shapes and letters.

  3. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, for hand tracking, a joint observation model combining skin saliency, motion, and depth cues is integrated into a particle filter in order to move particles toward local peaks of the likelihood. The proposed hand tracking method, namely, the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably when the signer performs the gesture toward the camera and in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor based on shape context is introduced, which describes the gesture in the spatiotemporal domain. The shape-order context descriptor efficiently captures shape relationships and embeds the sequence order of the gesture into the descriptor, yielding a matching score that is robust to gesture variation. Our approach is complemented with experimental results on the challenging hand-signed digits datasets and an American Sign Language dataset, which corroborate the performance of the proposed techniques.
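
    A particle-filter update with a joint observation model can be sketched as below. The product fusion of cue likelihoods and the systematic resampling step are common choices assumed for illustration; the record does not specify the authors' exact formulation.

    ```python
    # Particle-filter weight update fusing skin, motion, and depth likelihoods.
    import numpy as np

    def update_particles(particles, weights, skin_lik, motion_lik, depth_lik):
        """particles: (N, 2) hand-position hypotheses; *_lik: per-particle likelihoods."""
        # Joint observation model: treat the cues as conditionally independent,
        # so the joint likelihood is their product.
        joint = skin_lik * motion_lik * depth_lik
        weights = weights * joint
        weights = weights / (weights.sum() + 1e-12)
        # Systematic resampling concentrates particles near likelihood peaks.
        u = (np.arange(len(weights)) + np.random.rand()) / len(weights)
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), len(weights) - 1)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))
    ```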

  4. Hand biometric recognition based on fused hand geometry and vascular patterns.

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature computed from a hand-shape chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used, yielding a new multimodal biometric approach. The proposed multimodal biometric system uses only one image to extract the feature points and can be configured for low-cost devices. Our method fuses hand-geometry recognition (from the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
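
    Score-level fusion of the two matchers can be sketched as follows. Min-max normalization and a weighted-sum rule are conventional choices used here for illustration; the record does not state the exact fusion rule or weights.

    ```python
    # Score-level fusion of hand-geometry and vascular-pattern matching scores.
    import numpy as np

    def min_max_normalize(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    def fuse_scores(geometry_scores, vascular_scores, w_geometry=0.5):
        g = min_max_normalize(geometry_scores)
        v = min_max_normalize(vascular_scores)
        # Weighted sum; the decision threshold on the fused score trades off
        # false acceptance against false rejection (and sets the EER point).
        return w_geometry * g + (1.0 - w_geometry) * v
    ```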

  5. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature computed from a hand-shape chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used, yielding a new multimodal biometric approach. The proposed multimodal biometric system uses only one image to extract the feature points and can be configured for low-cost devices. Our method fuses hand-geometry recognition (from the side view and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  6. A New Profile Shape Matching Stereovision Algorithm for Real-time Human Pose and Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2014-02-01

    This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.

  7. Combining heterogenous features for 3D hand-held object recognition

    Science.gov (United States)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in the areas of human-machine interaction and multimedia retrieval. However, due to visual polysemy and concept polymorphism, it is still a great challenge to obtain reliable recognition results from 2D images. Recently, with the emergence and easy availability of RGB-D equipment such as Kinect, this challenge can be relieved because the depth channel brings more information. A special and important case of object recognition is hand-held object recognition, as the hand is a direct and natural medium for both human-human and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Hand-crafted features preserve low-level information such as shape and color, but are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages on large-scale recognition datasets but are not always robust to rotation or scale variance compared with hand-crafted features. In this paper, we propose a method to combine hand-crafted point cloud features and deep learned features from the RGB and depth channels. First, hand-held object segmentation is implemented using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features at different stages using linear concatenation and multiple kernel learning (MKL). Then a trained model is used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.
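
    A sketch of the simpler of the two fusion strategies named above (linear concatenation of hand-crafted and deep features followed by a linear classifier) is given below. The feature extractors and data shapes are placeholders; multiple kernel learning would instead learn a weighted combination of per-feature kernels.

    ```python
    # Fusing hand-crafted and deep features by concatenation, then classifying.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def concatenate_features(handcrafted, deep):
        """handcrafted: (N, d1) point-cloud descriptors; deep: (N, d2) CNN features."""
        return np.hstack([handcrafted, deep])

    # Hypothetical training data, for illustration only.
    X_hand = np.random.rand(100, 64)     # e.g., point-cloud shape/color descriptors
    X_deep = np.random.rand(100, 512)    # e.g., CNN activations from RGB and depth
    y = np.random.randint(0, 5, 100)     # object-class labels

    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(concatenate_features(X_hand, X_deep), y)
    ```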

  8. Computational intelligence in multi-feature visual pattern recognition hand posture and face recognition using biologically inspired approaches

    CERN Document Server

    Pisharady, Pramod Kumar; Poh, Loh Ai

    2014-01-01

    This book presents a collection of computational intelligence algorithms that addresses issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations and poor performance against complex backgrounds. The book has 3 parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer / tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...

  9. A natural approach to convey numerical digits using hand activity recognition based on hand shape features

    Science.gov (United States)

    Chidananda, H.; Reddy, T. Hanumantha

    2017-06-01

    This paper presents a natural representation of numerical digits using hand activity analysis, based on the number of fingers outstretched for each digit in a sequence extracted from a video. The analysis determines a set of six features from a hand image. The most important features used from each frame are the first fingertip from the top, the palm line, the palm center, and the valley points between the fingers that lie above the palm line. Using this work, a user can naturally convey any number of numerical digits in a video with the right hand, the left hand, or both. Each numerical digit ranges from 0 to 9. The hand(s) used to convey digits (right/left/both) can be recognized accurately using the valley points, and from this recognition it can be inferred whether the user is right- or left-handed. In this work, the hand(s) and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. The hand(s) are then analyzed to recognize the activity that represents a series of numerical digits in the video. This work uses a pixel-continuity algorithm based on 2D coordinate geometry and does not rely on calculus, contours, convex hulls, or training datasets.
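
    The first stage, skin detection in the YCbCr color space, can be sketched as below. The chroma thresholds are commonly cited values used as assumptions; the record does not give the exact thresholds or the ellipse parameters used for face removal.

    ```python
    # Skin segmentation by thresholding the chroma channels of YCbCr (YCrCb in OpenCV).
    import cv2
    import numpy as np

    def skin_mask_ycbcr(bgr_frame):
        """Return a binary mask of likely skin pixels for one video frame."""
        ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
        # Thresholding only the chroma channels keeps the mask relatively
        # insensitive to brightness changes.
        lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
        upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
        mask = cv2.inRange(ycrcb, lower, upper)
        # Morphological opening removes speckle noise from the mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    ```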

  10. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin-color-based segmentation. First, the face is removed from the image using a Haar classifier and, subsequently, the regions corresponding to the gesturing hands are isolated by a region-labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.

  11. Coordination of hand shape.

    Science.gov (United States)

    Pesyna, Colin; Pundi, Krishna; Flanders, Martha

    2011-03-09

    The neural control of hand movement involves coordination of the sensory, motor, and memory systems. Recent studies have documented the motor coordinates for hand shape, but less is known about the corresponding patterns of somatosensory activity. To initiate this line of investigation, the present study characterized the sense of hand shape by evaluating the influence of differences in the amount of grasping or twisting force, and differences in forearm orientation. Human subjects were asked to use the left hand to report the perceived shape of the right hand. In the first experiment, six commonly grasped items were arranged on the table in front of the subject: bottle, doorknob, egg, notebook, carton, and pan. With eyes closed, subjects used the right hand to lightly touch, forcefully support, or imagine holding each object, while 15 joint angles were measured in each hand with a pair of wired gloves. The forces introduced by supporting or twisting did not influence the perceptual report of hand shape, but for most objects, the report was distorted in a consistent manner by differences in forearm orientation. Subjects appeared to adjust the intrinsic joint angles of the left hand, as well as the left wrist posture, so as to maintain the imagined object in its proper spatial orientation. In a second experiment, this result was largely replicated with unfamiliar objects. Thus, somatosensory and motor information appear to be coordinated in an object-based, spatial-coordinate system, sensitive to orientation relative to gravitational forces, but invariant to grasp forcefulness.

  12. Towards NIRS-based hand movement recognition.

    Science.gov (United States)

    Paleari, Marco; Luciani, Riccardo; Ariano, Paolo

    2017-07-01

    This work reports preliminary results on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and is at the base of many gaming, social, and medical applications. Although in recent years the use of muscle information extracted by sEMG has spread from medical applications to the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to obtain another point of view on muscle phenomena and showed that, within a specific selection of movements, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose three different multimodal movement recognition approaches and compare their performances.

  13. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures as a replacement for traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the frequently used methods available in the literature for hand posture recognition is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust with a recognition rate close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against non-perfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide 3D information on the imaged scene in real time and at a high frame rate. This sensor has been described and evaluated for its capability to capture a moving hand in real time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand

  14. Hand Shape Affects Access to Memories

    NARCIS (Netherlands)

    K. Dijkstra (Katinka); M.P. Kaschak; R.A. Zwaan (Rolf)

    2008-01-01

    The present study examined the ways that body posture facilitated retrieval of autobiographical memories in more detail by focusing on two aspects of congruence in position of a specific body part: hand shape and hand orientation. Hand shape is important in the tactile perception and

  15. Exemplar Based Recognition of Visual Shapes

    DEFF Research Database (Denmark)

    Olsen, Søren I.

    2005-01-01

    This paper presents an approach to visual shape recognition based on exemplars of attributed keypoints. Training is performed by storing exemplars of keypoints detected in labeled training images. Recognition is made by keypoint matching and voting according to the labels of the matched keypoints. The matching is insensitive to rotations, limited scalings and small deformations. The recognition is robust to noise, background clutter and partial occlusion. Recognition is possible from few training images and improves with the number of training images.

  16. Hand-Geometry Recognition Based on Contour Parameters

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Bazen, A.M.; Booij, W.D.T.; Hendrikse, A.J.; Jain, A.K.; Ratha, N.K.

    This paper demonstrates the feasibility of a new method of hand-geometry recognition based on parameters derived from the contour of the hand. The contour is completely determined by the black-and-white image of the hand and can be derived from it by means of simple image-processing techniques. It

  17. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, video gaming, etc. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices to detect gestures: contact-based devices and contactless devices. Using ultrasonic waves for determining gestures is one of the approaches employed in contactless devices. Hand gesture recognition utilizing ultrasonic waves is the focus of this thesis work. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. This method uses a linear frequency modulated ultrasonic signal. The ultrasonic signal is designed to meet the project requirements such as the update rate, the range of detection, etc. Also, it needs to overcome hardware limitations such as the limited output power, transmitter and receiver bandwidth, etc. The method can be adapted to other hardware setups. Gestures are identified based on two main features: range estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods: channel impulse response (CIR) and cross correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized simple hardware setup was used to classify a set of hand gestures with high accuracy. The detection and classification were done using methods of low computational cost. This gives the proposed method great potential for implementation in many devices including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
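
    Range estimation via cross-correlation of the received echo with the transmitted chirp (matched filtering) can be sketched as follows. The sampling rate, chirp parameters and the RSS proxy are illustrative assumptions rather than the thesis setup.

    ```python
    # Estimate hand range by matched-filtering a received ultrasonic chirp echo.
    import numpy as np

    def estimate_range(tx_chirp, rx_signal, fs=192_000, speed_of_sound=343.0):
        """Return estimated distance to the hand (m) and a received-signal-strength proxy."""
        # The cross-correlation peaks at the lag where the echo best matches the chirp.
        corr = np.correlate(rx_signal, tx_chirp, mode="full")
        lag = int(np.argmax(np.abs(corr))) - (len(tx_chirp) - 1)
        delay = max(lag, 0) / fs                    # round-trip delay in seconds
        distance = speed_of_sound * delay / 2.0     # one-way distance to the hand
        rss = float(np.max(np.abs(corr)))           # crude received-signal-strength proxy
        return distance, rss
    ```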

  18. Hand posture effects on handedness recognition as revealed by the Simon effect

    Directory of Open Access Journals (Sweden)

    Allan P Lameira

    2009-11-01

    We investigated the influence of hand posture on handedness recognition, while varying the spatial correspondence between stimulus and response in a modified Simon task. Drawings of the left and right hands were displayed either in a back or palm view while participants discriminated stimulus handedness by pressing left/right keys with their hands resting either in a prone or supine posture. As a control, subjects performed a regular Simon task using simple geometric shapes as stimuli. Results showed that when hands were in a prone posture, the spatially corresponding trials (i.e., stimulus and response located on the same side) were faster than the non-corresponding trials (i.e., stimulus and response on opposite sides). In contrast, for the supine posture, there was no difference between corresponding and non-corresponding trials. The control experiment with the regular Simon task showed that the posture of the responding hand had no influence on performance. When the stimulus is the drawing of a hand, however, the posture of the responding hand affects the spatial correspondence effect because response location is coded based on multiple reference points, including the body of the hand.

  19. Hand Gesture Recognition with Leap Motion

    OpenAIRE

    Du, Youchen; Liu, Shenglan; Feng, Lin; Chen, Menghui; Wu, Jie

    2017-01-01

    The recent introduction of depth cameras like Leap Motion Controller allows researchers to exploit the depth information to recognize hand gesture more robustly. This paper proposes a novel hand gesture recognition system with Leap Motion Controller. A series of features are extracted from Leap Motion tracking data, we feed these features along with HOG feature extracted from sensor images into a multi-class SVM classifier to recognize performed gesture, dimension reduction and feature weight...

  20. Hand gesture recognition by analysis of codons

    Science.gov (United States)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could substitute for the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this thesis, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between maxima, minima and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (letters J and Z are ignored as they are represented using motion) [2]. The query image given as an input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and are assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.

  1. Body schema and corporeal self-recognition in the alien hand syndrome.

    Science.gov (United States)

    Olgiati, Elena; Maravita, Angelo; Spandri, Viviana; Casati, Roberta; Ferraro, Francesco; Tedesco, Lucia; Agostoni, Elio Clemente; Bolognini, Nadia

    2017-07-01

    The alien hand syndrome (AHS) is a rare neuropsychological disorder characterized by involuntary, yet purposeful, hand movements. Patients with the AHS typically complain about a loss of agency associated with a feeling of estrangement for actions performed by the affected limb. The present study explores the integrity of the body representation in AHS, focusing on 2 main processes: multisensory integration and visual self-recognition of body parts. Three patients affected by AHS following a right-hemisphere stroke, with clinical symptoms akin to the posterior variant of AHS, were tested and their performance was compared with that of 18 age-matched healthy controls. AHS patients and controls underwent 2 experimental tasks: a same-different visual matching task for body postures, which assessed the ability to use one's own body schema to encode others' body postural changes (Experiment 1), and an explicit self-hand recognition task, which assessed the ability to visually recognize one's own hands (Experiment 2). As compared to controls, all AHS patients were unable to access a reliable multisensory representation of their alien hand and use it for decoding others' postural changes; however, they could rely on an efficient multisensory representation of their intact (ipsilesional) hand. Two AHS patients also presented with a specific impairment in the visual self-recognition of their alien hand, but normal recognition of their intact hand. This evidence suggests that the AHS following a right-hemisphere stroke may involve a disruption of the multisensory representation of the alien limb; instead, self-hand recognition mechanisms may be spared. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    The paper describes a system for hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for activities where the movements of the robotic arm are not scheduled in advance, as it allows the robot to be trained more easily than with a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by postures of the hands.

  3. Shape descriptors for mode-shape recognition and model updating

    International Nuclear Information System (INIS)

    Wang, W; Mottershead, J E; Mares, C

    2009-01-01

    The most widely used method for comparing mode shapes from finite elements and experimental measurements is the Modal Assurance Criterion (MAC), which returns a single numerical value and carries no explicit information on shape features. New techniques, based on image processing (IP) and pattern recognition (PR), are described in this paper. The Zernike moment descriptor (ZMD), Fourier descriptor (FD), and wavelet descriptor (WD), presented in this article, are the most popular shape descriptors having properties that include efficiency of expression, robustness to noise, invariance to geometric transformation and rotation, separation of local and global shape features and computational efficiency. The comparison of mode shapes is readily achieved by assembling the shape features of each mode shape into multi-dimensional shape feature vectors (SFVs) and determining the distances separating them.
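
    For reference, the baseline being improved upon, the Modal Assurance Criterion, is a single correlation-like scalar between two mode shape vectors; a value near 1 indicates strong correlation. A minimal sketch of its standard definition is given below (real-valued mode shapes assumed for simplicity).

    ```python
    # Modal Assurance Criterion between an analytical and an experimental mode shape.
    import numpy as np

    def mac(phi_analytical, phi_experimental):
        a = np.asarray(phi_analytical, dtype=float).ravel()
        e = np.asarray(phi_experimental, dtype=float).ravel()
        # MAC = |a.e|^2 / ((a.a)(e.e)); it compresses the comparison to one scalar,
        # which is exactly the limitation the shape-feature vectors aim to address.
        return np.dot(a, e) ** 2 / (np.dot(a, a) * np.dot(e, e))
    ```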

  4. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    The natural user interface (NUI) provides a natural motion interface without using devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can also control the computer with the user's hand gestures, without a mouse and keyboard. In the proposed framework, the user's head direction and hand gestures are mapped onto mouse and keyboard events, respectively.

  5. An Approach for Pattern Recognition of EEG Applied in Prosthetic Hand Drive

    Directory of Open Access Journals (Sweden)

    Xiao-Dong Zhang

    2011-12-01

    Setting up a direct communication and control channel between the human brain and a prosthetic hand, so that the hand can be controlled by the electroencephalogram (EEG) alone, has become a hot spot in robotics research. In this paper, EEG signals recorded during multiple complex hand activities are analyzed. Two methods of EEG pattern recognition are then investigated, and a neural prosthetic hand system driven by a BCI is set up, which can complete four kinds of actions (arm free state, arm movement, hand crawl, hand open). Through several off-line and on-line experiments, the results show that the BCI-driven neural prosthetic hand system is reasonable and feasible, and that the C-support vector classifier based method outperforms the BP neural network for EEG pattern recognition of complex hand activities.

  6. Using virtual data for training deep model for hand gesture recognition

    Science.gov (United States)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise for classification in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply trained model for hand gesture recognition from hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy of 89%, and the second architecture, with split input, an accuracy of 85.2%. The authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method for data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for a copious amount of labelled data during training.

  7. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique was applied for posture recognition of sub-gestures. For experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  8. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique was applied for posture recognition of sub-gestures. For experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  9. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.

  10. Hand gesture recognition in confined spaces with partial observability and occultation constraints

    Science.gov (United States)

    Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.

  11. Electromyographic Grasp Recognition for a Five Fingered Robotic Hand

    Directory of Open Access Journals (Sweden)

    Nayan M. Kakoty

    2012-09-01

    This paper presents classification of grasp types based on surface electromyographic signals. Classification is performed with a radial basis function kernel support vector machine using sums of wavelet decomposition coefficients of the EMG signals. In a study involving six subjects, we achieved an average recognition rate of 86%. The electromyographic grasp recognition, together with an 8-bit microcontroller, has been employed to control a five-fingered robotic hand to emulate six grasp types used during 70% of daily living activities.
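
    The feature-plus-classifier pipeline described above can be sketched as follows. The wavelet family, decomposition level and use of absolute coefficient values are illustrative assumptions; the record does not specify them.

    ```python
    # Sums of wavelet decomposition coefficients per EMG channel, fed to an RBF SVM.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_features(emg_window, wavelet="db4", level=3):
        """emg_window: (channels, samples). One feature per wavelet sub-band per channel."""
        feats = []
        for channel in emg_window:
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            feats.extend(np.sum(np.abs(c)) for c in coeffs)
        return np.asarray(feats)

    # Hypothetical training step over labelled grasp windows:
    #   X = np.stack([wavelet_features(w) for w in windows]); y = grasp_labels
    #   clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
    ```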

  12. Shape Recognition Inputs to Figure-Ground Organization in Three-Dimensional Displays.

    Science.gov (United States)

    Peterson, Mary A.; Gibson, Bradley S.

    1993-01-01

    Three experiments with 29 college students and 8 members of a university community demonstrate that shape recognition processes influence perceived figure-ground relationships in 3-dimensional displays when the edge between 2 potential figural regions is both a luminance contrast edge and a disparity edge. Implications for shape recognition and…

  13. An Analysis of Intrinsic and Extrinsic Hand Muscle EMG for Improved Pattern Recognition Control.

    Science.gov (United States)

    Adewuyi, Adenike A; Hargrove, Levi J; Kuiken, Todd A

    2016-04-01

    Pattern recognition control combined with surface electromyography (EMG) from the extrinsic hand muscles has shown great promise for control of multiple prosthetic functions for transradial amputees. There is, however, a need to adapt this control method when implemented for partial-hand amputees, who possess both a functional wrist and information-rich residual intrinsic hand muscles. We demonstrate that combining EMG data from both intrinsic and extrinsic hand muscles to classify hand grasps and finger motions allows up to 19 classes of hand grasps and individual finger motions to be decoded, with an accuracy of 96% for non-amputees and 85% for partial-hand amputees. We evaluated real-time pattern recognition control of three hand motions in seven different wrist positions. We found that a system trained with both intrinsic and extrinsic muscle EMG data, collected while statically and dynamically varying wrist position increased completion rates from 73% to 96% for partial-hand amputees and from 88% to 100% for non-amputees when compared to a system trained with only extrinsic muscle EMG data collected in a neutral wrist position. Our study shows that incorporating intrinsic muscle EMG data and wrist motion can significantly improve the robustness of pattern recognition control for application to partial-hand prosthetic control.

  14. "Like the palm of my hands": Motor imagery enhances implicit and explicit visual recognition of one's own hands.

    Science.gov (United States)

    Conson, Massimiliano; Volpicella, Francesco; De Bellis, Francesco; Orefice, Agnese; Trojano, Luigi

    2017-10-01

    A key point in the motor imagery literature is that judging hands in palm view recruits sensory-motor information to a higher extent than judging hands in back view, due to the greater biomechanical complexity implied in rotating hands depicted from palm than from back. We took advantage of this solid evidence to test the nature of a phenomenon known as self-advantage, i.e. the advantage in implicitly recognizing self vs. others' hand images. The self-advantage has been actually found when implicitly but not explicitly judging self-hands, likely due to dissociation between implicit and explicit body representations. However, such a finding might be related to the extent to which motor imagery is recruited during implicit and explicit processing of hand images. We tested this hypothesis in two behavioural experiments. In Experiment 1, right-handed participants judged laterality of either self or others' hands, whereas in Experiment 2, an explicit recognition of one's own hands was required. Crucially, in both experiments participants were randomly presented with hand images viewed from back or from palm. The main result of both experiments was the self-advantage when participants judged hands from palm view. This novel finding demonstrates that increasing the "motor imagery load" during processing of self vs. others' hands can elicit a self-advantage in explicit recognition tasks as well. Future studies testing the possible dissociation between implicit and explicit visual body representations should take into account the modulatory effect of motor imagery load on self-hand processing. Copyright © 2017. Published by Elsevier B.V.

  15. Optical-electronic shape recognition system based on synergetic associative memory

    Science.gov (United States)

    Gao, Jun; Bao, Jie; Chen, Dingguo; Yang, Youqing; Yang, Xuedong

    2001-04-01

    This paper presents a novel optical-electronic shape recognition system based on synergetic associative memory. Our shape recognition system is composed of two parts: the first is a feature extraction system; the second is a synergetic pattern recognition system. The Hough transform is proposed for feature extraction of the unrecognized object, reducing dimensionality and filtering object distortion and noise; a synergetic neural network is proposed for realizing associative memory in order to eliminate spurious states. We then adopt an optical-electronic realization of the system that satisfies the demands of real time, high speed and parallelism. To obtain a fast algorithm, we replace the dynamic evolution circuit with an adjudging circuit according to the relationship between attention parameters and order parameters, and then implement the recognition of some simple images, whose results demonstrate the validity of the approach.

  16. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention of six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250 ms including data acquisition, transmission and processing. One SCI subject also participated in training sessions in his second and third visits. Both the control accuracy and efficiency tended to improve. These results show great potential for applying the advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.

  17. Optimizing pattern recognition-based control for partial-hand prosthesis application.

    Science.gov (United States)

    Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J

    2014-01-01

    Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (pgrasps available to the classifier significantly decrease classification error (pgrasp.

  18. A biometric authentication model using hand gesture images.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures from hand sign language is proposed. The hand gesture measurements can be acquired sequentially with a low-cost video camera. There is thus a further level of contextual information, associated with these hand signs, that can be used in biometric authentication. As an analogue, instead of typing a password 'iloveu' as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using a sequence of hand signs 'i', 'l', 'o', 'v', 'e', and 'u'. Features, which are inherently fuzzy in nature, are then extracted from the hand gesture images and recognized by a classification model that tells whether the signer is who he claims to be, by examining his hand shape and the postures used in making those signs. It is believed that everybody has certain slight but unique behavioral characteristics in sign language, as are their hand shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histograms and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation is conducted to investigate the efficacy of this novel biometric authentication model, which shows up to 93.75% recognition accuracy.

  19. Probabilistic models for 2D active shape recognition using Fourier descriptors and mutual information

    CSIR Research Space (South Africa)

    Govender, N

    2014-08-01

    ...information to improve the initial shape recognition results. We propose an initial system which performs shape recognition using the Euclidean distances of Fourier descriptors. To improve upon these results we build multinomial and Gaussian probabilistic...
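
    The initial system described (Euclidean distance between Fourier descriptors of 2D contours) can be sketched as below. The normalization steps (dropping the DC term for translation invariance, dividing by the first harmonic for scale, keeping magnitudes for rotation/start-point invariance) are standard choices assumed for illustration.

    ```python
    # Fourier-descriptor shape signature and Euclidean-distance matching.
    import numpy as np

    def fourier_descriptor(contour_xy, n_harmonics=16):
        """contour_xy: (N, 2) ordered boundary points of a closed shape."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]    # complex boundary signal
        coeffs = np.fft.fft(z)
        coeffs[0] = 0.0                                 # discard translation component
        mags = np.abs(coeffs)
        mags = mags / (mags[1] + 1e-12)                 # scale invariance
        return mags[1:n_harmonics + 1]                  # keep low-order harmonics only

    def shape_distance(contour_a, contour_b):
        return np.linalg.norm(fourier_descriptor(contour_a) - fourier_descriptor(contour_b))
    ```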

  20. Mechanical design of a shape memory alloy actuated prosthetic hand.

    Science.gov (United States)

    De Laurentis, Kathryn J; Mavroidis, Constantinos

    2002-01-01

    This paper presents the mechanical design for a new five-fingered, twenty degree-of-freedom dexterous hand patterned after human anatomy and actuated by Shape Memory Alloy artificial muscles. Two experimental prototypes of a finger, one fabricated by traditional means and another fabricated by rapid prototyping techniques, are described and used to evaluate the design. An important aspect of the Rapid Prototype technique used here is that this multi-articulated hand will be fabricated in one step, without requiring assembly, while maintaining its desired mobility. The use of Shape Memory Alloy actuators, combined with the rapid fabrication of the non-assembly-type hand, considerably reduces its weight and fabrication time. Therefore, the focus of this paper is the mechanical design of a dexterous hand that combines Rapid Prototype techniques and smart actuators. The type of robotic hand described in this paper can be utilized for applications requiring low weight, compactness, and dexterity such as prosthetic devices, space and planetary exploration.

  1. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    Science.gov (United States)

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called the spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient, since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public multiclass depth object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human computer interaction scenarios.

  2. Personal recognition using finger knuckle shape oriented features and texture analysis

    Directory of Open Access Journals (Sweden)

    K. Usha

    2016-10-01

    The finger knuckle print is considered one of the emerging hand biometric traits due to its potential for identifying individuals. This paper contributes a new method for personal recognition using the finger knuckle print based on two approaches, namely geometric and texture analyses. In the first approach, the shape-oriented features of the finger knuckle print are extracted by means of angular geometric analysis and then integrated to achieve a better precision rate. The knuckle texture analysis is carried out by means of a multi-resolution transform known as the Curvelet transform, which has the ability to approximate curved singularities with a minimum number of Curvelet coefficients. Since finger knuckle patterns mainly consist of lines and curves, the Curvelet transform is highly suitable for their representation. Further, the Curvelet transform decomposes the finger knuckle image into Curvelet sub-bands, which are termed ‘Curvelet knuckles’. Finally, principal component analysis is applied to each Curvelet knuckle to extract its feature vector through the covariance matrix derived from its Curvelet coefficients. Extensive experiments were conducted using the PolyU database and the IIT finger knuckle database. The experimental results confirm that our proposed method achieves a high recognition rate of 98.72% with a low false acceptance rate of 0.06%.

  3. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.

    Science.gov (United States)

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-04-15

    Poliarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions, however there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a poliarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.

  4. Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand.

    Science.gov (United States)

    Sato, K; Kamiyama, K; Kawakami, N; Tachi, S

    2010-01-01

    It is believed that the use of haptic sensors to measure the magnitude, direction, and distribution of a force will enable a robotic hand to perform dexterous operations. Therefore, we develop a new type of finger-shaped haptic sensor using GelForce technology. GelForce is a vision-based sensor that can be used to measure the distribution of force vectors, or surface traction fields. The simple structure of the GelForce enables us to develop a compact finger-shaped GelForce for the robotic hand. A GelForce developed on the basis of elastic theory can calculate surface traction fields using a conversion equation. However, this conversion equation cannot be solved analytically when the elastic body of the sensor has a complicated shape, such as the shape of a finger. Therefore, we propose an observational method and construct a prototype of the finger-shaped GelForce. Using this prototype, we evaluate the basic performance of the finger-shaped GelForce. We then conduct a field test by performing grasping operations with a robotic hand. The results of this test show that, using the observational method, the finger-shaped GelForce can be successfully used in a robotic hand.
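    Because the force-to-displacement relation of a finger-shaped elastic body cannot be solved analytically, the conversion is obtained observationally. A minimal sketch of such a data-driven calibration, assuming paired observations of marker displacements and known applied forces (all names hypothetical), is a linear least-squares fit:

    ```python
    import numpy as np

    def fit_conversion_matrix(displacements, forces):
        """Fit a linear map f ≈ C @ d from stacked marker-displacement
        vectors (n_obs, n_disp_dims) to applied force vectors
        (n_obs, n_force_dims) collected during calibration."""
        C_T, *_ = np.linalg.lstsq(displacements, forces, rcond=None)
        return C_T.T

    def estimate_traction(C, d):
        """Convert a new displacement vector into an estimated force."""
        return C @ d
    ```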

  5. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Impact of body posture on laterality judgement and explicit recognition tasks performed on self and others' hands.

    Science.gov (United States)

    Conson, Massimiliano; Errico, Domenico; Mazzarella, Elisabetta; De Bellis, Francesco; Grossi, Dario; Trojano, Luigi

    2015-04-01

    Judgments on the laterality of hand stimuli are faster and more accurate when dealing with one's own hand than with others' hands, i.e. the self-advantage. This advantage seems to be related to activation of a sensorimotor mechanism while implicitly processing one's own hands, but not during explicit recognition of one's own hand. Here, we specifically tested the influence of proprioceptive information on the self-hand advantage by manipulating participants' body posture during self and others' hand processing. In Experiment 1, right-handed healthy participants judged the laterality of either their own or others' hands, whereas in Experiment 2, an explicit recognition of one's own hands was required. In both experiments, the participants performed the task while holding their left or right arm flexed with their hand in direct contact with their chest ("flexed self-touch posture") or with their hand placed on a smooth wooden surface in correspondence with their chest ("flexed proprioceptive-only posture"). In an "extended control posture", both arms were extended and in contact with the thighs. In Experiment 1 (hand laterality judgment), we confirmed the self-advantage and demonstrated that it was enhanced when the subjects judged left-hand stimuli at 270° orientation while keeping their left arm in the flexed proprioceptive-only posture. In Experiment 2 (explicit self-hand recognition), instead, we found an advantage for others' hands ("self-disadvantage") independently of posture manipulation. Thus, position-related proprioceptive information from the left non-dominant arm can enhance the sensorimotor representation of one's own body, selectively favouring implicit self-hand processing.

  7. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time tracking of the face and hands applied in the system. In the method, an improved agent algorithm is used to extract the face and hand regions and track them. A Kalman filter is introduced to predict the position and size of the search rectangle, and an adaptive target-color model is designed to counteract the effect of illumination changes.
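    As an illustration of the Kalman prediction step mentioned above, a generic constant-velocity filter for the centre of the face/hand search rectangle could look like the following sketch (not the paper's exact formulation; noise parameters are illustrative):

    ```python
    import numpy as np

    class ConstantVelocityKalman:
        """Predicts and corrects the (x, y) centre of a search rectangle."""

        def __init__(self, q=1e-2, r=1.0):
            self.x = np.zeros(4)                      # state [x, y, vx, vy]
            self.P = np.eye(4)                        # state covariance
            self.F = np.eye(4)                        # transition matrix
            self.H = np.array([[1., 0., 0., 0.],
                               [0., 1., 0., 0.]])     # measurement matrix
            self.Q = q * np.eye(4)                    # process noise
            self.R = r * np.eye(2)                    # measurement noise

        def predict(self, dt=1.0):
            self.F[0, 2] = self.F[1, 3] = dt
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]                         # predicted centre

        def correct(self, measured_xy):
            z = np.asarray(measured_xy, dtype=float)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]
    ```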

  8. Static sign language recognition using 1D descriptors and neural networks

    Science.gov (United States)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

    A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture the images. Static signs were taken from the digits 1 to 9 of American Sign Language, and a multilayer perceptron reached 100% recognition with cross-validation.
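    One common way to reduce a segmented hand silhouette to a 1D descriptor, in the spirit of the contour-analysis method above, is a centroid-distance signature sampled at a fixed number of contour points. The sketch below assumes OpenCV and a binary hand mask; it is an illustrative stand-in rather than the paper's exact descriptor.

    ```python
    import cv2
    import numpy as np

    def centroid_distance_signature(binary_mask, n_points=128):
        """1D descriptor: distances from the contour centroid to n_points
        equally spaced samples of the largest contour, max-normalised."""
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).squeeze(axis=1)
        centroid = contour.mean(axis=0)
        idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
        dists = np.linalg.norm(contour[idx] - centroid, axis=1)
        return dists / dists.max()

    # The resulting vectors can be fed to a multilayer perceptron, e.g.
    # sklearn.neural_network.MLPClassifier, for digit-sign classification.
    ```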

  9. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    Science.gov (United States)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics is a pattern recognition system used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust location of trait points is a complicated and difficult issue in face recognition. Cootes proposed a Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement of ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that the ASM is faster and achieves more accurate trait point location than the AAM, but the AAM achieves a better match to the texture.

  10. Implement of Shape Memory Alloy Actuators in a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Daniel Amariei

    2006-10-01

    Full Text Available This paper was conceived to present the idea of utilizing advanced actuators to design and develop innovative, lightweight, powerful, compact, and as dexterous as possible robotic hands. The key to satisfying these objectives is the use of Shape Memory Alloys (SMAs) to power the joints of the robotic hand. The mechanical design of a dexterous robotic hand, which utilizes non-classical types of actuation and information obtained from the study of biological systems, is presented in this paper. The type of robotic hand described in this paper will be utilized for applications requiring low weight and power consumption, compactness, and dexterity.

  11. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have penetrated virtually all aspects of our lives, which raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which have not kept pace with the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
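    A minimal sketch of the average-background step described above, using OpenCV's running average to maintain the background model and frame differencing to isolate the moving hand (parameter values are illustrative):

    ```python
    import cv2
    import numpy as np

    def foreground_masks(frames, alpha=0.02, thresh=25):
        """Yield a binary foreground mask for each frame of a stream,
        using a running-average background model."""
        background = None
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if background is None:
                background = gray.copy()
            cv2.accumulateWeighted(gray, background, alpha)   # update model
            diff = cv2.absdiff(gray, background)
            _, mask = cv2.threshold(diff.astype(np.uint8), thresh, 255,
                                    cv2.THRESH_BINARY)
            yield mask
    ```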

  12. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Circular blurred shape model for multiclass symbol recognition.

    Science.gov (United States)

    Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia

    2011-04-01

    In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.

  14. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two recognition systems, one for eye movements and one for hand gestures, are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers and the results show that our system is robust and competitive.

  15. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  16. Transformation of hand-shape features for a biometric identification approach.

    Science.gov (United States)

    Travieso, Carlos M; Briceño, Juan Carlos; Alonso, Jesús B

    2012-01-01

    This work presents a biometric identification system for hand shape identification. The different contours have been coded based on angular descriptions forming a Markov chain descriptor. Discrete Hidden Markov Models (DHMM), each representing a target identification class, have been trained with such chains. Features have been calculated from a kernel based on the HMM parameter descriptors. Finally, supervised Support Vector Machines were used to classify parameters from the DHMM kernel. First, the system was modelled using 60 users to tune the DHMM and DHMM_kernel+SVM configuration parameters, and then the system was checked with the whole database (GPDS database, 144 users with 10 samples per class). Our experiments have obtained similar results in both cases, demonstrating a scalable, stable and robust system. Our experiments have achieved an upper success rate of 99.87% for the GPDS database using three hand samples per class in training mode, and seven hand samples in test mode. Secondly, the authors have verified their algorithms using another independent and public database (the UST database). Our approach has reached 100% and 99.92% success for the right and left hand, respectively, showing the robustness and independence of our algorithms. This success was achieved using as features the transformation of the 100-point hand shape with our DHMM kernel, and as classifier Support Vector Machines with linear separating functions.

  17. Transformation of Hand-Shape Features for a Biometric Identification Approach

    Directory of Open Access Journals (Sweden)

    Jesús B. Alonso

    2012-01-01

    Full Text Available This work presents a biometric identification system for hand shape identification. The different contours have been coded based on angular descriptions forming a Markov chain descriptor. Discrete Hidden Markov Models (DHMM), each representing a target identification class, have been trained with such chains. Features have been calculated from a kernel based on the HMM parameter descriptors. Finally, supervised Support Vector Machines were used to classify parameters from the DHMM kernel. First, the system was modelled using 60 users to tune the DHMM and DHMM_kernel+SVM configuration parameters, and then the system was checked with the whole database (GPDS database, 144 users with 10 samples per class). Our experiments have obtained similar results in both cases, demonstrating a scalable, stable and robust system. Our experiments have achieved an upper success rate of 99.87% for the GPDS database using three hand samples per class in training mode, and seven hand samples in test mode. Secondly, the authors have verified their algorithms using another independent and public database (the UST database). Our approach has reached 100% and 99.92% success for the right and left hand, respectively, showing the robustness and independence of our algorithms. This success was achieved using as features the transformation of the 100-point hand shape with our DHMM kernel, and as classifier Support Vector Machines with linear separating functions.

  18. The Effect of Hand Dimensions, Hand Shape and Some Anthropometric Characteristics on Handgrip Strength in Male Grip Athletes and Non-Athletes

    Science.gov (United States)

    Fallahi, Ali Asghar; Jadidian, Ali Akbar

    2011-01-01

    It has been suggested that athletes with longer fingers and larger hand surfaces enjoy stronger grip power. Therefore, some researchers have examined a number of factors and anthropometric variables that might explain this issue; to our knowledge, however, the data are scarce. Thus, the aim of this study was to investigate the effect of hand dimensions, hand shape and some anthropometric characteristics on handgrip strength in male grip athletes and non-athletes. Eighty subjects aged between 19 and 29 participated in this study in two groups: national and collegiate grip athletes (n=40) and non-athletes (n=40). Body height and mass were measured to calculate body mass index. The shape of the dominant hand was drawn on a piece of paper with a thin marker so that finger spans, finger lengths, and perimeters of the hand could be measured. The hand shape was estimated as the ratio of the hand width to the hand length. Handgrip strength was measured in the dominant and non-dominant hand using a standard dynamometer. Descriptive statistics were computed for each variable and an independent t-test was used to analyze the differences between the two groups. The Pearson correlation coefficient was used to evaluate the correlation between the studied variables, and a linear regression analysis was used to identify important predictors of handgrip strength. There was a significant difference between the two groups in absolute handgrip strength (p < 0.05), and most of the measured hand dimensions also differed significantly between the groups (p < 0.05). These findings may be useful for talent identification in handgrip-related sports and in clinical settings as well. PMID:23486361

  19. Two-dimensional shape recognition using oriented-polar representation

    Science.gov (United States)

    Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li

    1997-10-01

    To achieve position-, scale-, and rotation-invariant (PSRI) object recognition, we utilize some PSRI properties of images obtained from objects, for example, the centroid of the image. The position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation of the image. To obtain the information of the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from the initial ray. Each ray yields a number of intersections and the distances from the centroid to those intersections. The shape recognition algorithm is based on the least total error of these two items of information. Together with simple noise removal and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.
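    A sketch of how such an oriented-polar sampling might be computed from a grayscale shape image is shown below (names and the handling of the weighted mean point are illustrative; the published algorithm also records per-ray intersection counts, applies noise removal, and classifies with a backpropagation network):

    ```python
    import numpy as np

    def oriented_polar_signature(gray_image, n_rays=64, n_samples=200):
        """Sample a shape along n_rays angularly equispaced rays from its
        centroid, the initial ray pointing towards the intensity-weighted
        mean point; returns the farthest shape hit along each ray."""
        mask = gray_image > 0
        ys, xs = np.nonzero(mask)
        centroid = np.array([xs.mean(), ys.mean()])
        w = gray_image[ys, xs].astype(float)
        weighted_mean = np.array([(xs * w).sum(), (ys * w).sum()]) / w.sum()
        theta0 = np.arctan2(weighted_mean[1] - centroid[1],
                            weighted_mean[0] - centroid[0])
        h, wd = gray_image.shape
        t = np.linspace(0.0, float(np.hypot(h, wd)), n_samples)
        signature = np.zeros(n_rays)
        for k in range(n_rays):
            theta = theta0 + 2.0 * np.pi * k / n_rays
            px = np.clip(np.round(centroid[0] + t * np.cos(theta)).astype(int), 0, wd - 1)
            py = np.clip(np.round(centroid[1] + t * np.sin(theta)).astype(int), 0, h - 1)
            hits = t[mask[py, px]]
            signature[k] = hits.max() if hits.size else 0.0
        return signature
    ```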

  20. Aesthetic preference recognition of 3D shapes using EEG.

    Science.gov (United States)

    Chew, Lin Hou; Teo, Jason; Mountstephens, James

    2016-04-01

    Recognition and identification of aesthetic preference is indispensable in industrial design. Humans tend to pursue products with aesthetic value and make buying decisions based on their aesthetic preferences. Neuromarketing seeks to understand consumer responses to marketing stimuli by using imaging techniques and the recognition of physiological parameters. Numerous studies have been done to understand the relationship between humans, art and aesthetics. In this paper, we present a novel preference-based measurement of user aesthetics using electroencephalogram (EEG) signals for virtual 3D shapes with motion. The 3D shapes are designed to appear like bracelets and are generated using the Gielis superformula. EEG signals were collected using a medical-grade device, the B-Alert X10 from Advanced Brain Monitoring, with a sampling frequency of 256 Hz and a resolution of 16 bits. The signals obtained while viewing the 3D bracelet shapes were decomposed into alpha, beta, theta, gamma and delta rhythms by using time-frequency analysis, and then classified into two classes, namely like and dislike, by using support vector machine and K-nearest neighbors (KNN) classifiers, respectively. Classification accuracy of up to 80% was obtained using KNN with the alpha, theta and delta rhythms extracted from the frontal channels Fz, F3 and F4 as features.
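    For reference, the Gielis superformula mentioned above generates a family of closed curves from a handful of parameters (the study extends them to 3D, bracelet-like shapes); a short 2D sketch with illustrative parameter values:

    ```python
    import numpy as np

    def gielis_radius(phi, m=6, a=1.0, b=1.0, n1=0.3, n2=0.3, n3=0.3):
        """Gielis superformula radius:
        r = (|cos(m*phi/4)/a|**n2 + |sin(m*phi/4)/b|**n3) ** (-1/n1)."""
        term = (np.abs(np.cos(m * phi / 4.0) / a) ** n2 +
                np.abs(np.sin(m * phi / 4.0) / b) ** n3)
        return term ** (-1.0 / n1)

    phi = np.linspace(0.0, 2.0 * np.pi, 1000)
    r = gielis_radius(phi)
    x, y = r * np.cos(phi), r * np.sin(phi)   # closed, bracelet-like outline
    ```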

  1. Shape recognition contributions to figure-ground reversal: which route counts?

    Science.gov (United States)

    Peterson, M A; Harvey, E M; Weidenbacher, H J

    1991-11-01

    Observers viewed upright and inverted versions of figure-ground stimuli, in which Gestalt variables specified that the center was figure. In upright versions, the surround was high in denotivity, in that most viewers agreed it depicted the same shape; in inverted versions, the surround was low in denotivity. The surround was maintained as figure longer and was more likely to be obtained as figure when the stimuli were upright rather than inverted. In four experiments, these effects reflected inputs to figure-ground computations from orientation-specific shape representations only. To account for these findings, a nonratiomorphic mechanism is proposed that enables shape recognition processes before figure-ground relationships are determined.

  2. Visual shape recognition in crayfish as revealed by habituation.

    Directory of Open Access Journals (Sweden)

    Cinzia Chiandetti

    2017-08-01

    Full Text Available To cope with the everyday challenges that they encounter in their evolutionary niche, crayfish are considered to rely mainly on chemical information or, alternatively, on tactile information, but not much on vision. Hence, research has focused on chemical communication, whereas crayfish visual abilities remain poorly understood and investigated. To fill this gap, we tested whether crayfish (Procambarus clarkii) can distinguish between two different visual shapes matched in terms of luminance. To this aim, we measured both the habituation response to a repeated presentation of a given shape, a downright Y, and the response recovery when a novel shape was presented. The novel shape could be either a Möbius or the same Y-shape rotated upright. Our results demonstrate that, after habituation to the downright Y, crayfish showed a significantly higher response recovery to the Möbius than to the upright rotated Y. Hence, besides relying on chemo-haptic information, we found that crayfish can use sight alone to discriminate between different abstract geometrical shapes when they are macroscopically different. Failure to discriminate between the downright Y and its inversion, or generalization from the presence of a shape with three points forming a simple category, are both plausible, parsimonious explanations that should be investigated systematically in further studies. A future challenge will be understanding whether crayfish are capable of generalized shape recognition.

  3. On the feasibility of interoperable schemes in hand biometrics.

    Science.gov (United States)

    Morales, Aythami; González, Ester; Ferrer, Miguel A

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  4. On the Feasibility of Interoperable Schemes in Hand Biometrics

    Science.gov (United States)

    Morales, Aythami; González, Ester; Ferrer, Miguel A.

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  5. An implicit spatiotemporal shape model for human activity localization and recognition

    NARCIS (Netherlands)

    Oikonomopoulos, A.; Patras, I.; Pantic, Maja

    2009-01-01

    In this paper we address the problem of localisation and recognition of human activities in unsegmented image sequences. The main contribution of the proposed method is the use of an implicit representation of the spatiotemporal shape of the activity which relies on the spatiotemporal localization

  6. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    Science.gov (United States)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements is achieved by means of hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.

  7. Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View.

    Science.gov (United States)

    Bambach, Sven; Crandall, David J; Yu, Chen

    2015-11-01

    Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.

  8. Left hand tactile agnosia after posterior callosal lesion.

    Science.gov (United States)

    Balsamo, Maddalena; Trojano, Luigi; Giamundo, Arcangelo; Grossi, Dario

    2008-09-01

    We report a patient with a hemorrhagic lesion encroaching upon the posterior third of the corpus callosum but sparing the splenium. She showed marked difficulties in recognizing objects and shapes perceived through her left hand, while she could flawlessly appreciate elementary sensory features of items tactually presented to the same hand. This picture, corresponding to classical descriptions of unilateral associative tactile agnosia, was associated with finger agnosia of the left hand. This very unusual case can be interpreted as an instance of disconnection syndrome, and allows a discussion of the mechanisms involved in tactile object recognition.

  9. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    Science.gov (United States)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  10. A Novel Phonology- and Radical-Coded Chinese Sign Language Recognition Framework Using Accelerometer and Surface Electromyography Sensors.

    Science.gov (United States)

    Cheng, Juan; Chen, Xun; Liu, Aiping; Peng, Hu

    2015-09-15

    Sign language recognition (SLR) is an important communication tool between the deaf and the external world. It is highly desirable to develop a worldwide, continuous, large-vocabulary SLR system for practical usage. In this paper, we propose a novel phonology- and radical-coded Chinese SLR framework to demonstrate the feasibility of continuous SLR using accelerometer (ACC) and surface electromyography (sEMG) sensors. The continuous Chinese characters, consisting of coded sign gestures, are first segmented into active segments using EMG signals by means of a moving-average algorithm. Then, features of each component are extracted from both ACC and sEMG signals of the active segments (i.e., palm orientation represented by the mean and variance of ACC signals, hand movement represented by the fixed-point ACC sequence, and hand shape represented by both the mean absolute value (MAV) and autoregressive model coefficients (ARs)). Afterwards, palm orientation is first classified, distinguishing "Palm Downward" sign gestures from "Palm Inward" ones. Only the "Palm Inward" gestures are sent for further hand movement and hand shape recognition by the dynamic time warping (DTW) algorithm and hidden Markov models (HMMs), respectively. Finally, the component recognition results are integrated to identify a particular coded gesture. Experimental results demonstrate that the proposed SLR framework with a vocabulary scale of 223 characters can achieve an average recognition accuracy of 96.01% ± 0.83% for coded gesture recognition tasks and 92.73% ± 1.47% for character recognition tasks. The results also demonstrate that sEMG signals are rather consistent for a given hand shape, independent of hand movements. Hence, the number of training samples will not be significantly increased when the vocabulary scale increases, since not only the number of the completely new proposed coded gestures is constant and limited, but also the transition movement which connects successive signs needs no
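    The dynamic time warping step used for the hand-movement component can be sketched generically as follows (a plain DTW distance between two multi-channel ACC sequences; the paper's feature encoding, thresholds, and HMM stage are not reproduced here):

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """DTW distance between two sequences of shape (len, n_channels)."""
        seq_a = np.asarray(seq_a, dtype=float)
        seq_b = np.asarray(seq_b, dtype=float)
        la, lb = len(seq_a), len(seq_b)
        D = np.full((la + 1, lb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, la + 1):
            for j in range(1, lb + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[la, lb]

    # A test movement is assigned to the template with the smallest DTW
    # distance, while hand shapes are scored separately by the HMMs.
    ```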

  11. Endodontic shaping performance using nickel-titanium hand and motor ProTaper systems by novice dental students.

    Science.gov (United States)

    Tu, Ming-Gene; Chen, San-Yue; Huang, Heng-Li; Tsai, Chi-Cheng

    2008-05-01

    Preparing a continuous tapering conical shape and maintaining the original shape of a canal are obligatory in root canal preparation. The purpose of this study was to compare the shaping performance of the same novice dental students in simulated curved-canal resin blocks using hand and engine-driven nickel-titanium (NiTi) ProTaper instruments in an endodontic laboratory class. Twenty-three fourth-year dental students attending China Medical University Dental School prepared 46 simulated curved canals in resin blocks with two types of NiTi systems: hand and motor ProTaper files. Composite images were prepared for measurement. Material removed, canal width and canal deviation were measured at five levels in the apical 4 mm of the simulated curved canals using AutoCAD 2004 software. Data were analyzed using Wilcoxon's rank-sum test. The hand ProTaper group cut significantly wider than the motor rotary ProTaper group in the outer wall, except for the apical 0 mm point. The total canal width was cut significantly larger in the hand group than in the motor group. There was no significant difference between the two groups in centering canal shape, except at the 3 mm level. These findings show that the novice students' preparations of the simulated curved canals deviated more outwardly from apical 1 mm to 4 mm when the hand ProTaper was used. The ability to maintain the original curvature was better in the motor rotary ProTaper group than in the hand ProTaper group. Undergraduate students, if following the preparation sequence carefully, could successfully perform canal shaping with motor ProTaper files and achieve better root canal geometry than with hand ProTaper files within the same teaching and practicing sessions.

  12. Linguistic approach to object recognition by grasping

    Energy Technology Data Exchange (ETDEWEB)

    Marik, V

    1982-01-01

    A method for recognizing both three-dimensional object shapes and their sizes by grasping them with an anthropomorphic five-finger artificial hand is described. The hand is equipped with position-sensing elements in the joints of the fingers and with a tactile transducer net on the palm surface. The linguistic method uses formal grammars and languages for the pattern description. The recognition is arranged hierarchically, each level differing from the others in the formal language used. At every level the pattern description is generated and verified from the syntactic and semantic points of view. The results of the implementation of the recognition of cones, pyramids, spheres, prisms and cylinders are presented and discussed. 8 references.

  13. Robotic Hand-Assisted Training for Spinal Cord Injury Driven by Myoelectric Pattern Recognition: A Case Report.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Stampas, Argyrios; Zhou, Ping

    2017-10-01

    A 51-year-old man with an incomplete C6 spinal cord injury sustained 26 yrs ago attended twenty 2-hr visits over 10 wks for robot-assisted hand training driven by myoelectric pattern recognition. In each visit, his right hand was assisted to perform motions by an exoskeleton robot, while the robot was triggered by his own motion intentions. The hand robot was designed for this study, which can perform six kinds of motions, including hand closing/opening; thumb, index finger, and middle finger closing/opening; and middle, ring, and little fingers closing/opening. After the training, his grip force increased from 13.5 to 19.6 kg, his pinch force remained the same (5.0 kg), his score of Box and Block test increased from 32 to 39, and his score from the Graded Redefined Assessment of Strength, Sensibility, and Prehension test Part 4.B increased from 22 to 24. He accomplished the tasks in the Graded Redefined Assessment of Strength, Sensibility, and Prehension test Part 4.B 28.8% faster on average. The results demonstrate the feasibility and effectiveness of robot-assisted training driven by myoelectric pattern recognition after spinal cord injury.

  14. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    International Nuclear Information System (INIS)

    Lee, Jin Hyuck; Kim, Dae Hyun

    2015-01-01

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand. This application was designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.

  15. Shape-estimation of human hand using polymer flex sensor and study of its application to control robot arm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin Hyuck; Kim, Dae Hyun [Seoul National University of Technology, Seoul (Korea, Republic of)

    2015-02-15

    Ultrasonic inspection robot systems have been widely researched and developed for the real-time monitoring of structures such as power plants. However, an inspection robot that is operated in a simple pattern has limitations in its application to various structures in a plant facility because of the diverse and complicated shapes of the inspection objects. Therefore, accurate control of the robot is required to inspect complicated objects with high-precision results. This paper presents the idea that the shape and movement information of an ultrasonic inspector's hand could be profitably utilized for the accurate control of the robot. In this study, a polymer flex sensor was applied to monitor the shape of a human hand. This application was designed to intuitively control an ultrasonic inspection robot. The movement and shape of the hand were estimated by applying multiple sensors. Moreover, it was successfully shown that a test robot could be intuitively controlled based on the shape of a human hand estimated using polymer flex sensors.
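    A minimal sketch of how raw flex-sensor readings might be mapped to joint angles in such a system is a per-sensor linear calibration against known poses; the function names below are hypothetical and the real system may use a richer model:

    ```python
    import numpy as np

    def fit_flex_calibration(raw_readings, known_angles):
        """Fit angle = gain * reading + offset from calibration poses."""
        gain, offset = np.polyfit(raw_readings, known_angles, deg=1)
        return gain, offset

    def reading_to_angle(reading, gain, offset):
        """Convert a raw flex-sensor reading into an estimated joint angle
        that can be forwarded as a command to the robot-arm controller."""
        return gain * reading + offset
    ```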

  16. On the Feasibility of Interoperable Schemes in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Miguel A. Ferrer

    2012-02-01

    Full Text Available Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  17. Bringing transcranial mapping into shape: Sulcus-aligned mapping captures motor somatotopy in human primary motor hand area

    DEFF Research Database (Denmark)

    Raffin, Estelle; Pellegrino, Giovanni; Di Lazzaro, Vincenzo

    2015-01-01

    Motor representations express some degree of somatotopy in human primary motor hand area (M1HAND), but within-M1HAND corticomotor somatotopy has been difficult to study with transcranial magnetic stimulation (TMS). Here we introduce a “linear” TMS mapping approach based on the individual shape of the central sulcus to obtain mediolateral corticomotor excitability profiles of the abductor digiti minimi (ADM) and first dorsal interosseus (FDI) muscles. In thirteen young volunteers, we used stereotactic neuronavigation to stimulate the right M1HAND with a small eight-shaped coil at 120% of FDI resting...

  18. Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hee-Deok Yang

    2014-12-01

    Full Text Available Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated by Microsoft’s Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.

  19. Body shape-based biometric recognition using millimeter wave images

    OpenAIRE

    González-Sosa, Ester; Vera-Rodríguez, Rubén; Fiérrez, Julián; Ortega-García, Javier

    2013-01-01

    González-Sosa, E.; Vera-Rodríguez, R.; Fierrez, J.; Ortega-García, J. "Body shape-based biometric recognition using millime...

  20. The incidental binding of color and shape is insensitive to the perceptual load

    Directory of Open Access Journals (Sweden)

    Hugo Cezar Palhares Ferreira

    2016-01-01

    Full Text Available The binding of information in visual short-term memory may occur incidentally when information that is irrelevant to the task at hand is stored together with relevant information. We investigated the process of the incidental conjunction of color and shape (Exp1) and its potential association with the selection of relevant information for the memory task (Exp2). The results of Exp1 show that color and shape are incidentally and asymmetrically conjoined: color interferes with the recognition of shape, whereas shape does not interfere with the recognition of color. In Exp2, we investigated whether an increase in perceptual load would eliminate the processing of irrelevant information. The results of this experiment show that even with a high perceptual load, the incidental conjunction is not affected and color continues to interfere with shape recognition, suggesting that the incidental conjunction is an automatic process.

  1. 3D palmprint and hand imaging system based on full-field composite color sinusoidal fringe projection technique.

    Science.gov (United States)

    Zhang, Zonghua; Huang, Shujun; Xu, Yongjia; Chen, Chao; Zhao, Yan; Gao, Nan; Xiao, Yanjun

    2013-09-01

    Palmprint and hand shape, as two kinds of important biometric characteristics, have been widely studied and applied to human identity recognition. The existing research is based mainly on 2D images, which lose depth information. The biological features extracted from 2D images are distorted by pressure and rolling, so the subsequent feature matching and recognition are inaccurate. This paper presents a method to acquire accurate 3D palmprint and hand shapes, along with the corresponding color texture information, by projecting full-field composite color sinusoidal fringe patterns. A 3D imaging system is designed to capture and process the full-field composite color fringe patterns on the hand surface. Composite color fringe patterns having the optimum three fringe numbers are generated by software and projected onto the surface of the human hand by a digital light processing projector. From another viewpoint, a color CCD camera captures the deformed fringe patterns and saves them for postprocessing. After compensating for the cross talk and chromatic aberration between color channels, three fringe patterns are extracted from the three color channels of a captured composite color image. Wrapped phase information can be calculated from the sinusoidal fringe patterns with high precision. At the same time, the absolute phase of each pixel is determined by the optimum three-fringe selection method. After establishing the relationship between the absolute phase map and the 3D shape data, the 3D palmprint and hand shapes are obtained. Color texture information can be directly captured or demodulated from the captured composite fringe pattern images. Experimental results show that the proposed method and system can yield accurate 3D shape and color texture information of the palmprint and hand.
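    As background for the phase step above, the wrapped phase of a sinusoidal fringe pattern is commonly recovered from N equally phase-shifted intensity images; a generic N-step sketch follows (the paper's composite-colour demodulation, cross-talk compensation, and optimum three-fringe unwrapping involve additional processing):

    ```python
    import numpy as np

    def wrapped_phase(images):
        """Wrapped phase in [-pi, pi] from N equally phase-shifted fringe
        images stacked as an array of shape (N, H, W)."""
        images = np.asarray(images, dtype=float)
        n = images.shape[0]
        shifts = 2.0 * np.pi * np.arange(n) / n
        num = np.tensordot(np.sin(shifts), images, axes=1)
        den = np.tensordot(np.cos(shifts), images, axes=1)
        return -np.arctan2(num, den)
    ```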

  2. Compensatory motor control after stroke: an alternative joint strategy for object-dependent shaping of hand posture.

    Science.gov (United States)

    Raghavan, Preeti; Santello, Marco; Gordon, Andrew M; Krakauer, John W

    2010-06-01

    Efficient grasping requires planned and accurate coordination of finger movements to approximate the shape of an object before contact. In healthy subjects, hand shaping is known to occur early in reach under predominantly feedforward control. In patients with hemiparesis after stroke, execution of coordinated digit motion during grasping is impaired as a result of damage to the corticospinal tract. The question addressed here is whether patients with hemiparesis are able to compensate for their execution deficit with a qualitatively different grasp strategy that still allows them to differentiate hand posture to object shape. Subjects grasped a rectangular, concave, and convex object while wearing an instrumented glove. Reach-to-grasp was divided into three phases based on wrist kinematics: reach acceleration (reach onset to peak horizontal wrist velocity), reach deceleration (peak horizontal wrist velocity to reach offset), and grasp (reach offset to lift-off). Patients showed reduced finger abduction, proximal interphalangeal joint (PIP) flexion, and metacarpophalangeal joint (MCP) extension at object grasp across all three shapes compared with controls; however, they were able to partially differentiate hand posture for the convex and concave shapes using a compensatory strategy that involved increased MCP flexion rather than the PIP flexion seen in controls. Interestingly, shape-specific hand postures did not unfold initially during reach acceleration as seen in controls, but instead evolved later during reach deceleration, which suggests increased reliance on sensory feedback. These results indicate that kinematic analysis can identify and quantify within-limb compensatory motor control strategies after stroke. From a clinical perspective, quantitative study of compensation is important to better understand the process of recovery from brain injury. From a motor control perspective, compensation can be considered a model for how joint redundancy is exploited

  3. Effects of Isometric Hand-Grip Muscle Contraction on Young Adults' Free Recall and Recognition Memory

    Science.gov (United States)

    Tomporowski, Phillip D.; Albrecht, Chelesa; Pendleton, Daniel M.

    2017-01-01

    Purpose: The purpose of this study was to determine if physical arousal produced by isometric hand-dynamometer contraction performed during word-list learning affects young adults' free recall or recognition memory. Method: Twenty-four young adults (12 female; mean age = 22 years) were presented with four 20-item word lists. Moderate arousal…

  4. An observational study of implicit motor imagery using laterality recognition of the hand after stroke.

    Science.gov (United States)

    Amesz, Sarah; Tessari, Alessia; Ottoboni, Giovanni; Marsden, Jon

    2016-01-01

    To explore the relationship between laterality recognition after stroke and impairments in attention, 3D object rotation and functional ability. Observational cross-sectional study. Acute care teaching hospital. Thirty-two acute and sub-acute people with stroke and 36 healthy, age-matched controls. Laterality recognition, attention and mental rotation of objects. Within the stroke group, the relationship between laterality recognition and functional ability, neglect, hemianopia and dyspraxia were further explored. People with stroke were significantly less accurate (69% vs 80%) and showed delayed reaction times (3.0 vs 1.9 seconds) when determining the laterality of a pictured hand. Deficits either in accuracy or reaction times were seen in 53% of people with stroke. The accuracy of laterality recognition was associated with reduced functional ability (R² = 0.21), less accurate mental rotation of objects (R² = 0.20) and dyspraxia (p = 0.03). Implicit motor imagery is affected in a significant number of patients after stroke with these deficits related to lesions to the motor networks as well as other deficits seen after stroke. This research provides new insights into how laterality recognition is related to a number of other deficits after stroke, including the mental rotation of 3D objects, attention and dyspraxia. Further research is required to determine if treatment programmes can improve deficits in laterality recognition and impact functional outcomes after stroke.

  5. How a hobby can shape cognition: visual word recognition in competitive Scrabble players.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M; Zdrazilova, Lenka; Sargious, Peter

    2012-01-01

    Competitive Scrabble is an activity that involves extraordinary word recognition experience. We investigated whether that experience is associated with exceptional behavior in the laboratory in a classic visual word recognition paradigm: the lexical decision task (LDT). We used a version of the LDT that involved horizontal and vertical presentation and a concreteness manipulation. In Experiment 1, we presented this task to a group of undergraduates, as these participants are the typical sample in word recognition studies. In Experiment 2, we compared the performance of a group of competitive Scrabble players with a group of age-matched nonexpert control participants. The results of a series of cognitive assessments showed that the Scrabble players and control participants differed only in Scrabble-specific skills (e.g., anagramming). Scrabble expertise was associated with two specific effects (as compared to controls): vertical fluency (relatively less difficulty judging lexicality for words presented in the vertical orientation) and semantic deemphasis (smaller concreteness effects for word responses). These results suggest that visual word recognition is shaped by experience, and that with experience there are efficiencies to be had even in the adult word recognition system.

  6. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    Science.gov (United States)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems and therefore facilitate the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories: radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library that includes the effects of polarisation, incidence angle, and target shape, as radar and imaging-region sub-parameters, on the SAR images is extracted. This library shows that the created pattern for each of the cylindrical, conical, and cubic shapes is unique, and due to these unique properties such shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT1 satellite.

  7. Pure associative tactile agnosia for the left hand: clinical and anatomo-functional correlations.

    Science.gov (United States)

    Veronelli, Laura; Ginex, Valeria; Dinacci, Daria; Cappa, Stefano F; Corbo, Massimo

    2014-09-01

    Associative tactile agnosia (TA) is defined as the inability to associate information about object sensory properties derived through tactile modality with previously acquired knowledge about object identity. The impairment is often described after a lesion involving the parietal cortex (Caselli, 1997; Platz, 1996). We report the case of SA, a right-handed 61-year-old man affected by first ever right hemispheric hemorrhagic stroke. The neurological examination was normal, excluding major somaesthetic and motor impairment; a brain magnetic resonance imaging (MRI) confirmed the presence of a right subacute hemorrhagic lesion limited to the post-central and supra-marginal gyri. A comprehensive neuropsychological evaluation detected a selective inability to name objects when handled with the left hand in the absence of other cognitive deficits. A series of experiments were conducted in order to assess each stage of tactile recognition processing using the same stimulus sets: materials, 3D geometrical shapes, real objects and letters. SA and seven matched controls underwent the same experimental tasks during four sessions in consecutive days. Tactile discrimination, recognition, pantomime, drawing after haptic exploration out of vision and tactile-visual matching abilities were assessed. In addition, we looked for the presence of a supra-modal impairment of spatial perception and of specific difficulties in programming exploratory movements during recognition. Tactile discrimination was intact for all the stimuli tested. In contrast, SA was able neither to recognize nor to pantomime real objects manipulated with the left hand out of vision, while he identified them with the right hand without hesitations. Tactile-visual matching was intact. Furthermore, SA was able to grossly reproduce the global shape in drawings but failed to extract details of objects after left-hand manipulation, and he could not identify objects after looking at his own drawings. This case

  8. Kinect-based sign language recognition of static and dynamic hand movements

    Science.gov (United States)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities to express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also explain some factors they encountered that caused misclassification of signs.
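
    A minimal sketch of the normalized-correlation matching step is given below, assuming fixed-size images and a small template database; the paper's Kinect capture, frame segmentation and feature extraction are not reproduced, and random arrays stand in for real fingerspelling images.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def classify(image, templates):
    """Assign the label of the template with the highest NCC score."""
    scores = {label: ncc(image, tmpl) for label, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Toy usage: random "templates" for three letters and a noisy copy of one of them.
rng = np.random.default_rng(0)
templates = {letter: rng.random((64, 64)) for letter in "ABC"}
query = templates["B"] + 0.05 * rng.random((64, 64))
print(classify(query, templates))  # -> "B"
```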

  9. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
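
    As a rough illustration of the best-performing pipeline described above (desaturation only, followed by nearest-neighbour classification), the sketch below uses a plain 1-NN over flattened grey-level images; the toy data and image size are assumptions, not the paper's dataset.

```python
import numpy as np

def desaturate(rgb):
    """Reduce an RGB image (H, W, 3) to a single intensity channel."""
    return rgb.mean(axis=2)

def nearest_neighbor(query, train_images, train_labels):
    """1-NN classification on flattened, desaturated images."""
    q = desaturate(query).ravel()
    dists = [np.linalg.norm(desaturate(img).ravel() - q) for img in train_images]
    return train_labels[int(np.argmin(dists))]

# Toy usage: six random images standing in for the six gesture classes.
rng = np.random.default_rng(1)
train_images = [rng.random((32, 32, 3)) for _ in range(6)]
train_labels = list(range(6))
print(nearest_neighbor(train_images[3] + 0.01, train_images, train_labels))  # -> 3
```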

  10. Universal Robot Hand Equipped with Tactile and Joint Torque Sensors: Development and Experiments on Stiffness Control and Object Recognition

    Directory of Open Access Journals (Sweden)

    Hiroyuki NAKAMOTO

    2007-04-01

    Full Text Available Various humanoid robots have been developed, and multifunctional robot hands that can be attached to those robots, like a human hand, are needed. However, a practical robot hand has not yet been developed, because there are many problems, such as controlling many degrees of freedom and processing enormous sensor outputs. To realize such a robot hand, we have developed a five-fingered robot hand. In this paper, the detailed structure of the developed robot hand is described. The robot hand we developed has five multi-jointed fingers equipped with joint torque sensors and tactile sensors. We report experimental results of stiffness control with the developed robot hand. Those results show that it is possible to change the stiffness of the joints. Moreover, we propose an object recognition method using the tactile sensor. The validity of that method is confirmed by experimental results.
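
    The abstract does not spell out the control law, so the following is only a minimal joint-space stiffness sketch consistent with the reported experiments: the commanded joint torque is proportional to the position error, and changing the gain changes the apparent stiffness. All joint angles and gain values are invented.

```python
import numpy as np

def stiffness_control(q, q_des, k_gains):
    """Joint-space stiffness control: torque proportional to joint position error,
    so larger gains make the finger joints behave more stiffly."""
    return k_gains * (q_des - q)

q      = np.array([0.10, 0.25, 0.40])   # measured joint angles [rad]
q_des  = np.array([0.20, 0.30, 0.35])   # target joint angles [rad]
k_soft = np.array([1.0, 1.0, 1.0])      # low stiffness  [Nm/rad]
k_hard = np.array([5.0, 5.0, 5.0])      # high stiffness [Nm/rad]

print(stiffness_control(q, q_des, k_soft))   # gentle corrective torques
print(stiffness_control(q, q_des, k_hard))   # stiffer response to the same error
```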

  11. Endodontic Shaping Performance Using Nickel–Titanium Hand and Motor ProTaper Systems by Novice Dental Students

    OpenAIRE

    Tu, Ming-Gene; Chen, San-Yue; Huang, Heng-Li; Tsai, Chi-Cheng

    2008-01-01

    Preparing a continuous tapering conical shape and maintaining the original shape of a canal are obligatory in root canal preparation. The purpose of this study was to compare the shaping performance in simulated curved canal resin blocks of the same novice dental students using hand-prepared and engine-driven nickel–titanium (NiTi) rotary ProTaper instruments in an endodontic laboratory class. Methods: Twenty-three fourth-year dental students attending China Medical University Dental Schoo...

  12. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.

  13. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task while in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns in the data streams and to identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns they represent with a reasonable accuracy ratio.
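
    As a rough illustration of the kind of trajectory encoding such a pipeline feeds to an HMM, the sketch below quantizes a 2D fingertip trajectory into a standard 8-direction chain code; the paper's modified chain-code method and the HMM training itself are not reproduced, and the sample trajectory is invented.

```python
import numpy as np

def chain_code(points):
    """Quantize a 2D trajectory into an 8-direction chain code
    (0 = east, directions increase counter-clockwise)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)            # -pi .. pi
        codes.append(int(np.round(angle / (np.pi / 4))) % 8)
    return codes

# Hypothetical fingertip trajectory: moving right, then up.
trajectory = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(chain_code(trajectory))   # -> [0, 0, 2, 2]
```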

  14. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-01-01

    estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods: channel impulse response (CIR) and cross-correlation (CC) of the reflected ultrasonic signal from the gesturing hand. A customized
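
    The record's abstract is truncated, but the cross-correlation step it names is a standard delay estimator. Below is a minimal sketch that recovers an echo delay as the lag maximizing the cross-correlation between the transmitted burst and the received signal; the sampling rate, burst frequency and delay are arbitrary assumptions.

```python
import numpy as np

def delay_by_cross_correlation(tx, rx, fs):
    """Estimate the delay of an ultrasonic echo as the lag that maximizes the
    cross-correlation between transmitted and received signals."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(corr)) - (len(tx) - 1)
    return lag / fs

fs = 200_000                                     # 200 kHz sampling (assumed)
t = np.arange(0, 0.002, 1 / fs)
tx = np.sin(2 * np.pi * 40_000 * t)              # 40 kHz burst
rx = np.concatenate([np.zeros(150), tx])         # echo delayed by 150 samples
print(delay_by_cross_correlation(tx, rx, fs))    # -> 150 / fs = 0.00075 s
```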

  15. Endodontic Shaping Performance Using Nickel–Titanium Hand and Motor ProTaper Systems by Novice Dental Students

    Directory of Open Access Journals (Sweden)

    Ming-Gene Tu

    2008-05-01

    Conclusion: These findings show that the simulated curved canals prepared by the novice students deviated more outwardly from 1 mm to 4 mm from the apex when the hand ProTaper was used. The ability to maintain the original curvature was better in the motor rotary ProTaper group than in the hand ProTaper group. Undergraduate students, if following the preparation sequence carefully, could successfully perform canal shaping with motor ProTaper files and achieve better root canal geometry than with hand ProTaper files within the same teaching and practicing sessions.

  16. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming.

    Science.gov (United States)

    Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara

    2010-03-01

    We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
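
    The following is a deliberately simplified, symbol-level sketch of the level-building idea with a virtual movement-epenthesis (ME) label: a dynamic program segments a stream into vocabulary signs or an ME filler that absorbs frames at a fixed per-frame cost. The real eLB algorithm works on video feature vectors, incorporates grammar models and nests a second DP for hand-candidate selection, none of which is shown here; the vocabulary, costs and frame stream are invented.

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[-1, -1])

def level_building(frames, vocab, me_cost=0.6):
    """Segment a frame-label stream into vocabulary signs plus a virtual ME
    filler; best[t] holds the cheapest explanation of frames[:t]."""
    n = len(frames)
    best = [np.inf] * (n + 1)
    back = [None] * (n + 1)
    best[0] = 0.0
    for t in range(1, n + 1):
        for t0 in range(t):
            seg = frames[t0:t]
            cost = best[t0] + me_cost * len(seg)        # ME option: fixed per-frame cost
            if cost < best[t]:
                best[t], back[t] = cost, (t0, "ME")
            for sign, model in vocab.items():           # match a sign model to the segment
                cost = best[t0] + edit_distance(seg, model)
                if cost < best[t]:
                    best[t], back[t] = cost, (t0, sign)
    labels, t = [], n
    while t > 0:                                        # backtrack the best segmentation
        t0, label = back[t]
        labels.append(label)
        t = t0
    return labels[::-1], best[n]

vocab = {"HELLO": list("aab"), "THANKS": list("cdd")}
frames = list("aab" + "xx" + "cdd")    # two signs separated by an ME movement
print(level_building(frames, vocab))   # -> (['HELLO', 'ME', 'THANKS'], 1.2)
```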

  17. Synthesis and anion recognition properties of shape-persistent binaphthyl-containing chiral macrocyclic amides

    Directory of Open Access Journals (Sweden)

    Marco Caricato

    2012-06-01

    Full Text Available We report on the synthesis and characterization of novel shape-persistent, optically active arylamide macrocycles, which can be obtained using a one-pot methodology. Resolved, axially chiral binol scaffolds, which incorporate either methoxy or acetoxy functionalities in the 2,2' positions and carboxylic functionalities in the external 3,3' positions, were used as the source of chirality. Two of these binaphthyls are joined through amidation reactions using rigid diaryl amines of differing shapes, to give homochiral tetraamidic macrocycles. The recognition properties of these supramolecular receptors have been analyzed, and the results indicate a modulation of binding affinities towards dicarboxylate anions, with a drastic change of binding mode depending on the steric and electronic features of the functional groups in the 2,2' positions.

  18. Rapid prototyping prosthetic hand acting by a low-cost shape-memory-alloy actuator.

    Science.gov (United States)

    Soriano-Heras, Enrique; Blaya-Haro, Fernando; Molino, Carlos; de Agustín Del Burgo, José María

    2018-06-01

    The purpose of this article is to develop a new concept of modular and operative prosthetic hand based on rapid prototyping and a novel shape-memory-alloy (SMA) actuator, thus minimizing the manufacturing costs. An underactuated mechanism was needed for the design of the prosthesis to use only one input source. Taking into account the state of the art, an underactuated mechanism prosthetic hand was chosen so as to implement the modifications required for including the external SMA actuator. A modular design of a new prosthesis was developed which incorporated a novel SMA actuator for the index finger movement. The primary objective of the prosthesis is achieved, obtaining a modular and functional low-cost prosthesis based on additive manufacturing executed by a novel SMA actuator. The external SMA actuator provides a modular system which allows implementing it in different systems. This paper combines rapid prototyping and a novel SMA actuator to develop a new concept of modular and operative low-cost prosthetic hand.

  19. Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Michalis Papakostas

    2017-06-01

    Full Text Available Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction or understanding the affective state of users in certain tasks, where other modalities such as video or physiological parameters are unavailable. In general, a human’s emotions may be recognized using several modalities, such as analyzing facial expressions, speech, or physiological parameters (e.g., electroencephalograms, electrocardiograms, etc.). However, measuring these modalities may be difficult, obtrusive or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained using raw speech information. In contrast to traditional machine learning approaches, CNNs are responsible for identifying the important features of the input, thus making hand-crafted feature engineering optional in many tasks. In this paper no extra features are required other than the spectrogram representations, and hand-crafted features were only extracted for validation purposes of our method. Moreover, it does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it is able to provide superior results vs. traditional ones that use hand-crafted features.
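
    The approach above starts from spectrogram "images" of the speech signal; the sketch below shows that input stage only (a log-magnitude spectrogram computed with SciPy), not the CNN itself. The sampling rate, window parameters and toy waveform are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def speech_to_log_spectrogram(waveform, fs, nperseg=400, noverlap=240):
    """Compute a log-magnitude spectrogram, the 2D input a CNN feature
    extractor would consume in this kind of approach."""
    f, t, sxx = spectrogram(waveform, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(sxx + 1e-10)          # shape: (freq_bins, time_frames)

fs = 16_000                              # 16 kHz speech (assumed)
t = np.arange(0, 1.0, 1 / fs)
waveform = np.sin(2 * np.pi * 220 * t)   # toy signal standing in for speech
image = speech_to_log_spectrogram(waveform, fs)
print(image.shape)                       # 2D array ready to be fed to a CNN
```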

  20. Compensatory Motor Control After Stroke: An Alternative Joint Strategy for Object-Dependent Shaping of Hand Posture

    OpenAIRE

    Raghavan, Preeti; Santello, Marco; Gordon, Andrew M.; Krakauer, John W.

    2010-01-01

    Efficient grasping requires planned and accurate coordination of finger movements to approximate the shape of an object before contact. In healthy subjects, hand shaping is known to occur early in reach under predominantly feedforward control. In patients with hemiparesis after stroke, execution of coordinated digit motion during grasping is impaired as a result of damage to the corticospinal tract. The question addressed here is whether patients with hemiparesis are able to compensate for th...

  1. Parametric Primitives for Hand Gesture Recognition

    DEFF Research Database (Denmark)

    Baby, Sanmohan; Krüger, Volker

    2009-01-01

    Imitation learning is considered to be an effective way of teaching humanoid robots, and action recognition is the key step to imitation learning. In this paper, an online algorithm to recognize parametric actions with object context is presented. Objects are key instruments in understanding...

  2. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes the emotion...... recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...

  3. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions generated from such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system, in which two types of gesture recognition modules are prepared and with which the most appropriate recognition module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach)(for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.
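
    A toy sketch of the motion-based module and the switching idea is given below: frame differencing measures how much of the image is moving, and a simple threshold decides whether the motion-based or the detection-based (shape) module should handle the frame. The abstract does not state the actual switching criterion, so the threshold rule here is purely illustrative.

```python
import numpy as np

def motion_energy(prev_frame, frame, threshold=0.1):
    """Motion-based module: fraction of pixels whose intensity changed
    between consecutive frames (simple frame differencing)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return float((diff > threshold).mean())

def switch_module(prev_frame, frame, motion_cutoff=0.05):
    """Hypothetical switching rule: use motion analysis when enough of the
    image is moving, otherwise fall back to shape (detection) analysis."""
    return "motion" if motion_energy(prev_frame, frame) > motion_cutoff else "shape"

rng = np.random.default_rng(2)
prev_frame = rng.random((120, 160))
still = prev_frame.copy()
moving = prev_frame.copy()
moving[40:80, 60:120] += 0.5              # a hand-sized region changes
print(switch_module(prev_frame, still))   # -> "shape"
print(switch_module(prev_frame, moving))  # -> "motion"
```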

  4. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    Science.gov (United States)

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for whom only the name or only the familiarity, respectively, was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  5. Welcome to wonderland: the influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects.

    Science.gov (United States)

    Linkenauger, Sally A; Leyrer, Markus; Bülthoff, Heinrich H; Mohler, Betty J

    2013-01-01

    The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand which is difficult in the real world due to impliability of hand dimensions. To overcome this limitation, we used virtual reality to manipulate dimensions of participants' fully-tracked, virtual hands to investigate its influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimations of the sizes of virtual objects differ depending on the size of their virtual hand in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of the spatial layout, they also demonstrate the influence of virtual bodies on perception of virtual environments.

  6. Welcome to wonderland: the influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects.

    Directory of Open Access Journals (Sweden)

    Sally A Linkenauger

    Full Text Available The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand which is difficult in the real world due to impliability of hand dimensions. To overcome this limitation, we used virtual reality to manipulate dimensions of participants' fully-tracked, virtual hands to investigate its influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimations of the sizes of virtual objects differ depending on the size of their virtual hand in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of the spatial layout, they also demonstrate the influence of virtual bodies on perception of virtual environments.

  7. End-Stop Exemplar Based Recognition

    DEFF Research Database (Denmark)

    Olsen, Søren I.

    2003-01-01

    An approach to exemplar based recognition of visual shapes is presented. The shape information is described by attributed interest points (keys) detected by an end-stop operator. The attributes describe the statistics of lines and edges local to the interest point, the position of neighboring interest points, and (in the training phase) a list of recognition names. Recognition is made by a simple voting procedure. Preliminary experiments indicate that the recognition is robust to noise, small deformations, background clutter and partial occlusion....

  8. When Passive Feels Active--Delusion-Proneness Alters Self-Recognition in the Moving Rubber Hand Illusion.

    Science.gov (United States)

    Louzolo, Anaïs; Kalckert, Andreas; Petrovic, Predrag

    2015-01-01

    Psychotic patients have problems with bodily self-recognition such as the experience of self-produced actions (sense of agency) and the perception of the body as their own (sense of ownership). While it has been shown that such impairments in psychotic patients can be explained by hypersalient processing of external sensory input it has also been suggested that they lack normal efference copy in voluntary action. However, it is not known how problems with motor predictions like efference copy contribute to impaired sense of agency and ownership in psychosis or psychosis-related states. We used a rubber hand illusion based on finger movements and measured sense of agency and ownership to compute a bodily self-recognition score in delusion-proneness (indexed by Peters' Delusion Inventory - PDI). A group of healthy subjects (n=71) experienced active movements (involving motor predictions) or passive movements (lacking motor predictions). We observed a highly significant correlation between delusion-proneness and self-recognition in the passive conditions, while no such effect was observed in the active conditions. This was seen for both ownership and agency scores. The result suggests that delusion-proneness is associated with hypersalient external input in passive conditions, resulting in an abnormal experience of the illusion. We hypothesize that this effect is not present in the active condition because deficient motor predictions counteract hypersalience in psychosis proneness.

  9. Person Recognition Method using Sequential Walking Footprints via Overlapped Foot Shape and Center-Of-Pressure Trajectory

    Directory of Open Access Journals (Sweden)

    Jin-Woo Jung

    2013-08-01

    Full Text Available One emerging biometric identification method is the use of human footprints. However, in previous research there were some limitations resulting from the spatial resolution of sensors. One possible method to overcome this limitation is through the use of additional information such as dynamic walking information in sequential walking footprints. In this study, we suggest a new person recognition scheme based on both overlapped foot shape and COP (Center Of Pressure) trajectory during one-step walking. We show the usefulness of the suggested method, obtaining a 98.6% recognition rate in our experiment with eleven people. In addition, we show an application of the suggested method, an automatic door-opening system for intelligent residential space.

  10. Hands-off and hands-on casting consistency of amputee below knee sockets using magnetic resonance imaging.

    Science.gov (United States)

    Safari, Mohammad Reza; Rowe, Philip; McFadyen, Angus; Buis, Arjan

    2013-01-01

    Residual limb shape capturing (casting) consistency has a great influence on the quality of socket fit. Magnetic Resonance Imaging was used to establish a reliable reference grid for intercast and intracast shape and volume consistency of two common casting methods, Hands-off and Hands-on. Residual limbs were cast for twelve people with a unilateral below-knee amputation and scanned twice for each casting concept. Subsequently, all four volume images of each amputee were semiautomatically segmented and registered to a common coordinate system using the tibia, and then the shape and volume differences were calculated. The results show that both casting methods have intracast volume consistency and there is no significant volume difference between the two methods. Inter- and intracast mean volume differences were not clinically significant based on the one-sock volume criterion. Neither the Hands-off nor the Hands-on method resulted in a consistent residual limb shape, as the coefficient of variation of shape differences was high. The resultant shape of the residual limb in the Hands-off casting was variable but the differences were not clinically significant. For the Hands-on casting, shape differences were equal to the maximum acceptable limit for a poor socket fit.

  11. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology. (c) 2015 APA, all rights reserved).

  12. Recognition and use of line drawings by children with severe intellectual disabilities: the effects of color and outline shape.

    Science.gov (United States)

    Stephenson, Jennifer

    2009-03-01

    Communication symbols for students with severe intellectual disabilities often take the form of computer-generated line drawings. This study investigated the effects of the match between color and shape of line drawings and the objects they represented on drawing recognition and use. The match or non-match between color and shape of the objects and drawings did not have an effect on participants' ability to match drawings to objects, or to use drawings to make choices.

  13. Automatic anatomy recognition via multiobject oriented active shape models.

    Science.gov (United States)

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2010-12-01

    This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) multiobject generalization of OASM and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three level dynamic programming algorithm, wherein the first level is at pixel level which aims to find optimal oriented boundary segments between successive landmarks, the second level is at landmark level which aims to find optimal location for the landmarks, and the third level is at the object level which aims to find optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find that pose vector (consisting of translation, rotation, and scale component) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a

  14. Challenges and Specifications for Robust Face and Gait Recognition Systems for Surveillance Application

    Directory of Open Access Journals (Sweden)

    BUCIU Ioan

    2014-05-01

    Full Text Available Automated person recognition (APR) based on biometric signals addresses the process of automatically recognizing a person according to his or her physiological traits (face, voice, iris, fingerprint, ear shape, body odor, electroencephalogram – EEG, electrocardiogram, or hand geometry) or behavioural patterns (gait, signature, hand-grip, lip movement). The paper aims at briefly presenting the current challenges for two specific non-cooperative biometric approaches, namely face and gait biometrics, as well as approaches that consider a combination of the two in the attempt to build a more robust system for accurate APR in the context of surveillance applications. Open problems from both sides are also pointed out.

  15. LGBT Family Lawyers and Same-Sex Marriage Recognition: How Legal Change Shapes Professional Identity and Practice.

    Science.gov (United States)

    Baumle, Amanda K

    2018-01-10

    Lawyers who practice family law for LGBT clients are key players in the tenuous and evolving legal environment surrounding same-sex marriage recognition. Building on prior research on factors shaping the professional identities of lawyers generally, and activist lawyers specifically, I examine how practice within a rapidly changing, patchwork legal environment shapes professional identity for this group of lawyers. I draw on interviews with 21 LGBT family lawyers to analyze how the unique features of LGBT family law shape their professional identities and practice, as well as their predictions about the development of the practice in a post-Obergefell world. Findings reveal that the professional identities and practice of LGBT family lawyers are shaped by uncertainty, characteristics of activist lawyering, community membership, and community service. Individual motivations and institutional forces work to generate a professional identity that is resilient and dynamic, characterized by skepticism and distrust coupled with flexibility and creativity. These features are likely to play a role in the evolution of the LGBT family lawyer professional identity post-marriage equality.

  16. Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition

    Science.gov (United States)

    Khayat, Omid; Afarideh, Hossein

    2013-04-01

    Track counting algorithms, as one of the fundamental tools of nuclear science, have been emphasized in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the task of imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. Features and the classifier are finally validated via 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature-based methods.

  17. Personal authentication through dorsal hand vein patterns

    Science.gov (United States)

    Hsu, Chih-Bin; Hao, Shu-Sheng; Lee, Jen-Chun

    2011-08-01

    Biometric identification is an emerging technology that can solve security problems in our networked society. A reliable and robust personal verification approach using dorsal hand vein patterns is proposed in this paper. The approach has low computational and memory requirements and a high recognition accuracy. In our work, a near-infrared charge-coupled device (CCD) camera is adopted as the input device for capturing dorsal hand vein images; it has the advantages of low-cost and noncontact imaging. In the proposed approach, two finger-peaks are automatically selected as the datum points to define the region of interest (ROI) in the dorsal hand vein images. The modified two-directional two-dimensional principal component analysis, which performs an alternate two-dimensional PCA (2DPCA) in the column direction of images in the 2DPCA subspace, is proposed to exploit the correlation of vein features inside the ROI between images. The major advantage of the proposed method is that it requires fewer coefficients for efficient dorsal hand vein image representation and recognition. The experimental results on our large dorsal hand vein database show that the presented scheme achieves promising performance (false reject rate: 0.97% and false acceptance rate: 0.05%) and is feasible for dorsal hand vein recognition.
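
    The sketch below shows the first (row-direction) stage of 2DPCA on image matrices, assuming toy random arrays in place of real ROI vein images; the paper's modified method additionally applies a 2DPCA in the column direction inside this subspace, which is omitted here.

```python
import numpy as np

def two_d_pca(images, n_components=4):
    """Basic 2DPCA: learn a projection that acts directly on image matrices,
    using the image covariance matrix G = mean((A - mean)^T (A - mean))."""
    mean = np.mean(images, axis=0)
    g = np.zeros((images.shape[2], images.shape[2]))
    for a in images:
        d = a - mean
        g += d.T @ d
    g /= len(images)
    vals, vecs = np.linalg.eigh(g)              # eigenvalues in ascending order
    return mean, vecs[:, -n_components:]        # keep the top eigenvectors

def project(image, mean, proj):
    """Project one image into the learned 2DPCA subspace."""
    return (image - mean) @ proj                # (rows x n_components) feature matrix

rng = np.random.default_rng(3)
gallery = rng.random((20, 64, 48))              # 20 toy ROI images, 64 x 48 pixels
mean, proj = two_d_pca(gallery, n_components=4)
print(project(gallery[0], mean, proj).shape)    # -> (64, 4)
```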

  18. Enhanced visuo-haptic integration for the non-dominant hand.

    Science.gov (United States)

    Yalachkov, Yavor; Kaiser, Jochen; Doehrmann, Oliver; Naumer, Marcus J

    2015-07-21

    Visuo-haptic integration contributes essentially to object shape recognition. Although there has been a considerable advance in elucidating the neural underpinnings of multisensory perception, it is still unclear whether seeing an object and exploring it with the dominant hand elicits the same brain response as compared to the non-dominant hand. Using fMRI to measure brain activation in right-handed participants, we found that for both left- and right-hand stimulation the left lateral occipital complex (LOC) and anterior cerebellum (aCER) were involved in visuo-haptic integration of familiar objects. These two brain regions were then further investigated in another study, where unfamiliar, novel objects were presented to a different group of right-handers. Here the left LOC and aCER were more strongly activated by bimodal than unimodal stimuli only when the left but not the right hand was used. A direct comparison indicated that the multisensory gain of the fMRI activation was significantly higher for the left than the right hand. These findings are in line with the principle of "inverse effectiveness", implying that processing of bimodally presented stimuli is particularly enhanced when the unimodal stimuli are weak. This applies also when right-handed subjects see and simultaneously touch unfamiliar objects with their non-dominant left hand. Thus, the fMRI signal in the left LOC and aCER induced by visuo-haptic stimulation is dependent on which hand was employed for haptic exploration. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Side-View Face Recognition

    NARCIS (Netherlands)

    Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    As a widely used biometrics, face recognition has many advantages such as being non-intrusive, natural and passive. On the other hand, in real-life scenarios with uncontrolled environment, pose variation up to side-view positions makes face recognition a challenging work. In this paper we discuss

  20. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.

  1. Homozygous mutations in IHH cause acrocapitofemoral dysplasia, an autosomal recessive disorder with cone- shaped epiphyses in hands and hips

    NARCIS (Netherlands)

    Hellemans, J; Coucke, PJ; Giedion, A; De Paepe, A; Kramer, P; Beemer, F; Mortier, GR

    Acrocapitofemoral dysplasia is a recently delineated autosomal recessive skeletal dysplasia, characterized clinically by short stature with short limbs and radiographically by cone-shaped epiphyses, mainly in hands and hips. Genome-wide homozygosity mapping in two consanguineous families linked the

  2. Earlier and greater hand pre-shaping in the elderly: a study based on kinematic analysis of reaching movements to grasp objects.

    Science.gov (United States)

    Tamaru, Yoshiki; Naito, Yasuo; Nishikawa, Takashi

    2017-11-01

    Elderly people are less able to manipulate objects skilfully than young adults. Although previous studies have examined age-related deterioration of hand movements with a focus on the phase after grasping objects, the changes in the reaching phase have not been studied thus far. We aimed to examine whether changes in hand shape patterns during the reaching phase of grasping movements differ between young adults and the elderly. Ten healthy elderly adults and 10 healthy young adults were examined using the Simple Test for Evaluating Hand Functions and kinematic analysis of hand pre-shaping reach-to-grasp tasks. The results were then compared between the two groups. For the kinematic analysis, we measured the time of peak tangential velocity of the wrist and the inter-fingertip distance (the distance between the tips of the thumb and index finger) at different time points. The results showed that the elderly group's performance on the Simple Test for Evaluating Hand Functions was significantly lower than that of the young adult group, irrespective of whether the dominant or non-dominant hand was used, indicating deterioration of hand movement in the elderly. The peak tangential velocity of the wrist in either hand appeared significantly earlier in the elderly group than in the young adult group. The elderly group also showed larger inter-fingertip distances with arch-like fingertip trajectories compared to the young adult group for all object sizes. To perform accurate prehension, elderly people have an earlier peak tangential velocity point than young adults. This allows for a longer adjustment time for reaching and grasping movements and for reducing errors in object prehension by opening the hand and fingers wider. Elderly individuals gradually modify their strategy based on previous successes and failures during daily living to compensate for their decline in dexterity and operational capabilities. © 2017 Japanese Psychogeriatric Society.

  3. Sketch Style Recognition, Transfer and Synthesis of Hand-Drawn Sketches

    KAUST Repository

    Shaheen, Sara

    2017-07-19

    Humans have always used sketches to explain the visual world. It is a simple and straightforward means of communicating new ideas and designs. Consequently, as in almost every aspect of our modern life, the relatively recent major developments in computer science have contributed greatly to enhancing the individual sketching experience. The literature of sketch-related research has witnessed seminal advancements and a large body of interesting work. Following up on this rich literature, this dissertation provides a holistic study on sketches through three proposed novel models including sketch analysis, transfer, and geometric representation. The first part of the dissertation targets sketch authorship recognition and analysis of sketches. It provides answers to the following questions: Are simple strokes unique to the artist or designer who renders them? If so, can this idea be used to identify authorship or to classify artistic drawings? The proposed stroke authorship recognition approach is a novel method that distinguishes the authorship of 2D digitized drawings. This method converts a drawing into a histogram of stroke attributes that is discriminative of authorship. Extensive classification experiments on a large variety of datasets are conducted to validate the ability of the proposed techniques to distinguish unique authorship of artists and designers. The second part of the dissertation is concerned with sketch style transfer from one free-hand drawing to another. The proposed method exploits techniques from multi-disciplinary areas including geometrical modeling and image processing. It consists of two methods of transfer: stroke-style and brush-style transfer. (1) Stroke-style transfer aims to transfer the style of the input sketch at the stroke level to the style encountered in other sketches by other artists. This is done by modifying all the parametric stroke segments in the input, so as to minimize a global stroke-level distance between the input and

  4. Finger Angle-Based Hand Gesture Recognition for Smart Infrastructure Using Wearable Wrist-Worn Camera

    Directory of Open Access Journals (Sweden)

    Feiyu Chen

    2018-03-01

    Full Text Available The rise of domestic robots in smart infrastructure has raised demands for intuitive and natural interaction between humans and robots. To address this problem, a wearable wrist-worn camera (WwwCam) is proposed in this paper. With the capability of recognizing human hand gestures in real time, it enables services such as controlling mopping robots, mobile manipulators, or appliances in smart-home scenarios. The recognition is based on finger segmentation and template matching. A distance transformation algorithm is adopted and adapted to robustly segment fingers from the hand. Based on the fingers' angles relative to the wrist, a finger angle prediction algorithm and a template matching metric are proposed. All possible gesture types of the captured image are first predicted, and then evaluated and compared to the template image to achieve the classification. Unlike other template matching methods that rely heavily on a large training set, this scheme possesses high flexibility since it requires only one image as the template, and can classify gestures formed by different combinations of fingers. In the experiment, it successfully recognized ten finger gestures from number zero to nine defined by American Sign Language with an accuracy of up to 99.38%. Its performance was further demonstrated by manipulating a robot arm using the implemented algorithms and WwwCam to transport and pile up wooden building blocks.
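
    The sketch below illustrates only the finger-angle idea under stated assumptions: fingertip angles are measured relative to the wrist point, and a gesture matches a template when the sorted angles agree within a tolerance. The finger segmentation, the paper's prediction algorithm and its actual matching metric are not reproduced; the templates and points are invented.

```python
import numpy as np

def finger_angles(wrist, fingertips):
    """Angles (degrees) of detected fingertips relative to the wrist point."""
    tips = np.asarray(fingertips, dtype=float) - np.asarray(wrist, dtype=float)
    return np.degrees(np.arctan2(tips[:, 1], tips[:, 0]))

def match_gesture(observed, templates, tol=15.0):
    """Toy matching: a template matches if it has the same number of fingers
    and its sorted angles are all within `tol` degrees of the observed ones."""
    best, best_err = None, np.inf
    for name, t_angles in templates.items():
        if len(t_angles) != len(observed):
            continue
        err = np.abs(np.sort(observed) - np.sort(t_angles)).max()
        if err < tol and err < best_err:
            best, best_err = name, err
    return best

templates = {"two": [80.0, 110.0], "three": [70.0, 95.0, 120.0]}     # hypothetical
obs = finger_angles(wrist=(0, 0), fingertips=[(3, 17), (-5, 14)])
print(match_gesture(obs, templates))                                 # -> "two"
```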

  5. Dexterous hand gestures recognition based on low-density sEMG signals for upper-limb forearm amputees

    Directory of Open Access Journals (Sweden)

    John Jairo Villarejo Mayor

    2017-08-01

    Full Text Available Abstract Introduction Intuitive prosthesis control is one of the most important challenges in order to reduce the user effort in learning how to use an artificial hand. This work presents the development of a novel method for pattern recognition of sEMG signals able to discriminate, in a very accurate way, dexterous hand and fingers movements using a reduced number of electrodes, which implies more confidence and usability for amputees. Methods The system was evaluated for ten forearm amputees and the results were compared with the performance of able-bodied subjects. Multiple sEMG features based on fractal analysis (detrended fluctuation analysis and Higuchi’s fractal dimension combined with traditional magnitude-based features were analyzed. Genetic algorithms and sequential forward selection were used to select the best set of features. Support vector machine (SVM, K-nearest neighbors (KNN and linear discriminant analysis (LDA were analyzed to classify individual finger flexion, hand gestures and different grasps using four electrodes, performing contractions in a natural way to accomplish these tasks. Statistical significance was computed for all the methods using different set of features, for both groups of subjects (able-bodied and amputees. Results The results showed average accuracy up to 99.2% for able-bodied subjects and 98.94% for amputees using SVM, followed very closely by KNN. However, KNN also produces a good performance, as it has a lower computational complexity, which implies an advantage for real-time applications. Conclusion The results show that the method proposed is promising for accurately controlling dexterous prosthetic hands, providing more functionality and better acceptance for amputees.
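
    One of the fractal features named above, Higuchi's fractal dimension, is a well-defined computation; the sketch below implements it for a 1D signal, using synthetic data (white noise and a sine) rather than real sEMG recordings. The detrended fluctuation analysis, feature selection and classifiers are not shown.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1D signal: average curve lengths L(k)
    over coarser samplings, then fit the slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

rng = np.random.default_rng(4)
print(round(higuchi_fd(rng.standard_normal(2000)), 2))                 # noise: close to 2
print(round(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000))), 2))   # smooth: close to 1
```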

  6. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition, 3D-MoS (Three Dimensional Move on Sensing), based on visual servoing technology was designed, and the on-board hand-eye-dual-camera robot system has been developed to reduce the risks of radiation exposure during decontamination processes by a filter press machine that solidifies and reduces the volume of radiation-contaminated soil. The features of 3D-MoS include: (1) both hand-eye cameras take images of the target object near the intersection of the two lenses' centerlines; (2) observing at the intersection enables both cameras to see the target object almost at the center of both images; (3) this brings benefits such as reducing the effect of lens aberration and improving the accuracy of three-dimensional position detection. In this study, an accuracy validation test of the interdigitation of the robot's hand into the filter cloth rod of the filter press, a task crucial for removing the contaminated cloth from the filter press machine automatically and for preventing workers from being exposed to radiation, was performed. The following results were derived: (1) the 3D-MoS-controlled robot could recognize the rod at an arbitrary position within the designated space, and all insertion tests were carried out successfully; and (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 [mm] with a standard deviation of 0.6 [mm] or less. (author)

  7. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing.

    Science.gov (United States)

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2012-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers' exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remotes alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge.' This monograph covers state-of-the-art hand gesture recognition approaches and how they have evolved since their inception. The author also details his research in this area over the past 8 years and how the future of HCI might unfold. This monograph will serve as a valuable guide for researchers who venture into the world of HCI.

  9. 3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2014-02-01

    Full Text Available New low cost sensors and open free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by a RGBD sensor. It works in real time and it does not require visual marks, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light is changed. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices, remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.

  10. Character context: a shape descriptor for Arabic handwriting recognition

    Science.gov (United States)

    Mudhsh, Mohammed; Almodfer, Rolla; Duan, Pengfei; Xiong, Shengwu

    2017-11-01

    In the handwriting recognition field, designing good descriptors is essential to obtaining rich information from the data. However, finding a good descriptor remains an open issue due to the unlimited variation in human handwriting. We introduce a "character context descriptor" that efficiently deals with the structural characteristics of Arabic handwritten characters. First, the character image is smoothed and normalized; then the character context descriptor of 32 feature bins is built based on the proposed "distance function." Finally, a multilayer perceptron with regularization is used as a classifier. In experiments with a handwritten Arabic character database, the proposed method achieved state-of-the-art performance, with recognition rates equal to 98.93% and 99.06% for the 66 and 24 classes, respectively.
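
    The abstract does not define the 32-bin descriptor or its distance function precisely, so the sketch below is only one plausible interpretation: foreground pixels are binned by normalized distance and angle relative to the character centroid, giving a 32-dimensional histogram. The bin layout and the toy character are assumptions, not the paper's construction.

```python
import numpy as np

def character_context(binary_image, n_dist_bins=8, n_angle_bins=4):
    """Hypothetical 32-bin shape histogram: foreground pixels binned by
    normalized centroid distance and angle (8 x 4 = 32 bins)."""
    ys, xs = np.nonzero(binary_image)
    cy, cx = ys.mean(), xs.mean()
    d = np.hypot(ys - cy, xs - cx)
    d = d / (d.max() + 1e-9)                                # normalize to [0, 1]
    a = (np.arctan2(ys - cy, xs - cx) + np.pi) / (2 * np.pi)
    hist, _, _ = np.histogram2d(d, a, bins=[n_dist_bins, n_angle_bins],
                                range=[[0, 1], [0, 1]])
    hist = hist.ravel()
    return hist / hist.sum()                                # 32-dim feature vector

# Toy "character": a diagonal stroke on a 64 x 64 canvas.
img = np.zeros((64, 64), dtype=np.uint8)
for i in range(10, 54):
    img[i, i] = 1
print(character_context(img).shape)                         # -> (32,)
```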

  11. Comparative evaluation of dentinal defects induced by hand files, hyflex, protaper next and one shape during canal preparation: A stereomicroscopic study

    Directory of Open Access Journals (Sweden)

    Ekta Garg

    2017-01-01

    Full Text Available Aim: This study aims to evaluate and compare the incidence of dentinal defects induced by Hand Files, HyFlex CM, ProTaper Next (PTN), and One Shape during canal preparation. Materials and Methods: One hundred and fifty extracted mandibular premolar teeth with a single root canal were selected. Specimens were then divided into five groups of thirty specimens each. Group I: Specimens were prepared with hand instruments. Group II: Specimens were prepared with HyFlex CM rotary files (Coltene) using a crown-down technique according to the manufacturer's instructions. Group III: Specimens were prepared with PTN rotary files (Dentsply) using a crown-down technique according to the manufacturer's instructions. Group IV: Specimens were prepared with the One Shape single-file rotary system (MicroMega) using a crown-down technique according to the manufacturer's instructions. Group V: Specimens were used as a control and left unprepared. All roots were cut horizontally at 3, 6, and 9 mm from the apex. Sections were then viewed under a stereomicroscope and dentinal defects were registered as “no defect,” “fracture,” and “other defects.” Statistical Analysis: Results of the study were subjected to the Chi-square test. Results: Results were expressed as the number and percentage of defective, partially defective, and defect-free roots in each group. Conclusion: Hand files and the One Shape file system caused fewer root defects than the PTN and HyFlex file systems.

  12. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    Science.gov (United States)

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

    Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance to shift, rotation and noise was checked through medium-scale tests on the GREC symbol reference database. Even if extracting the topology of a shape by mapping the shortest paths connecting all the pixels is powerful, the construction of the graph incurs a high algorithmic cost. In this article we discuss ways to reduce computation time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results on the GREC2003 database show that the proposed method offers good discrimination power and real robustness to noise, with acceptable computation time.
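
    The move from image space to the block discrete cosine space can be sketched as below (an illustration of the general idea, not the authors' implementation); an 8×8 block size is assumed.

    ```python
    # Minimal sketch: non-overlapping block DCT of a binary symbol image.
    # Low-frequency coefficients give the compact space in which the
    # spanning-tree model can then be built at lower cost.
    import numpy as np
    from scipy.fft import dctn

    def block_dct(img, block=8):
        h, w = img.shape
        h, w = h - h % block, w - w % block       # crop to a multiple of the block size
        out = np.empty((h, w))
        for i in range(0, h, block):
            for j in range(0, w, block):
                out[i:i+block, j:j+block] = dctn(img[i:i+block, j:j+block].astype(float),
                                                 norm="ortho")
        return out

    symbol = np.zeros((64, 64)); symbol[16:48, 16:48] = 1.0   # toy square "symbol"
    coeffs = block_dct(symbol)
    print(coeffs.shape)
    ```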

  13. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    Science.gov (United States)

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offer an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots.
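
    The depth-from-translation argument can be made concrete with the standard motion-parallax relation (a toy illustration, not the paper's model): for a stationary point at distance d viewed at angle θ from the direction of a pure translation at speed v, the angular velocity is ω = v·sin(θ)/d, so nearer surface points sweep across the eye faster. The flight speed and viewing angle below are made-up values.

    ```python
    # Toy check of the motion-parallax relation omega = v * sin(theta) / d:
    # nearer points produce larger angular displacements, and depth can be
    # recovered by inverting the relation.
    import numpy as np

    v = 0.3                                   # assumed lateral flight speed, m/s
    theta = np.deg2rad(90.0)                  # point viewed broadside to the motion
    depths = np.array([0.05, 0.10, 0.20])     # surface structure at 5, 10, 20 cm
    omega = v * np.sin(theta) / depths        # image angular velocity, rad/s
    print(np.round(np.rad2deg(omega), 1))     # faster for nearer points
    print(v * np.sin(theta) / omega)          # depths recovered from omega
    ```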

  14. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradients (HOG) features from a hand image and then using them to generate SVM models, which are used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation and (iii) feature extraction and Arabic alphabet recognition. On each input image, first obtained using a depth sensor, we apply our method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
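
    A minimal sketch of the HOG-plus-SVM stage described above (not the authors' pipeline): HOG features are extracted from already-segmented hand images and a linear SVM is trained over the alphabet classes. The images, class count and HOG parameters below are assumptions.

    ```python
    # Minimal sketch: HOG features + SVM for static hand-shape classification.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    images = rng.random((100, 64, 64))        # placeholders for segmented hand crops
    labels = rng.integers(0, 28, 100)         # ~28 alphabet classes assumed

    def hog_features(img):
        return hog(img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    X = np.array([hog_features(im) for im in images])
    svm = SVC(kernel="linear").fit(X, labels)
    print("training accuracy on toy data:", svm.score(X, labels))
    ```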

  15. A theory of shape identification

    CERN Document Server

    Cao, Frédéric; Morel, Jean-Michel; Musé, Pablo; Sur, Frédéric

    2008-01-01

    Recent years have seen dramatic progress in shape recognition algorithms applied to ever-growing image databases. They have been applied to image stitching, stereo vision, image mosaics, solid object recognition and video or web image retrieval. More fundamentally, the ability of humans and animals to detect and recognize shapes is one of the enigmas of perception. The book describes a complete method that starts from a query image and an image database and yields a list of the images in the database containing shapes present in the query image. A false alarm number is associated with each detection. Many experiments will show that familiar simple shapes or images can reliably be identified with false alarm numbers ranging from 10^-5 to less than 10^-300. Technically speaking, there are two main issues. The first is extracting invariant shape descriptors from digital images. The second is deciding whether two shape descriptors are identifiable as the same shape or not. A perceptual principle, the Helmholtz princi...

  16. Canonical Skeletons for Shape Matching

    NARCIS (Netherlands)

    Eede, M. van; Macrini, D.; Telea, A.; Sminchisescu, C.; Dickinson, S.

    2006-01-01

    Skeletal representations of 2-D shape, including shock graphs, have become increasingly popular for shape matching and object recognition. However, it is well known that skeletal structure can be unstable under minor boundary deformation, part articulation, and minor shape deformation (due to, for

  17. Modulation of pathogen recognition by autophagy

    Directory of Open Access Journals (Sweden)

    Ji Eun eOh

    2012-03-01

    Full Text Available Autophagy is an ancient biological process for maintaining cellular homeostasis by degradation of long-lived cytosolic proteins and organelles. Recent studies demonstrated that autophagy is used by immune cells to regulate innate immunity. On the one hand, cells exert direct effector function by degrading intracellular pathogens; on the other hand, autophagy modulates pathogen recognition and downstream signaling for innate immune responses. Pathogen recognition via pattern recognition receptors induces autophagy. The function of phagocytic cells is enhanced by recruitment of autophagy-related proteins. Moreover, autophagy acts as a delivery system for viral replication complexes to migrate to the endosomal compartments where virus sensing occurs. In another case, key molecules of the autophagic pathway have been found to negatively regulate immune signaling, thus preventing aberrant activation of cytokine production and consequent immune responses. In this review, we focus on the recent advances in the role of autophagy in pathogen recognition and modulation of innate immune responses.

  18. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

    representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... robust color description, color calibration is performed. The framework was used in three recognition tasks: object instance recognition, object category recognition, and object spatial relationship recognition. For the object instance recognition task, we present a system that utilizes color and scale...

  19. Wavelet-based moment invariants for pattern recognition

    Science.gov (United States)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  20. Development of a prototype over-actuated biomimetic prosthetic hand.

    Directory of Open Access Journals (Sweden)

    Matthew R Williams

    Full Text Available The loss of a hand can greatly affect quality of life. A prosthetic device that can mimic normal hand function is very important to physical and mental recuperation after hand amputation, but the currently available prosthetics do not fully meet the needs of the amputee community. Most prosthetic hands are not dexterous enough to grasp a variety of shaped objects, and those that are tend to be heavy, leading to discomfort while wearing the device. In order to attempt to better simulate human hand function, a dexterous hand was developed that uses an over-actuated mechanism to form grasp shape using intrinsic joint mounted motors in addition to a finger tendon to produce large flexion force for a tight grip. This novel actuation method allows the hand to use small actuators for grip shape formation, and the tendon to produce high grip strength. The hand was capable of producing fingertip flexion force suitable for most activities of daily living. In addition, it was able to produce a range of grasp shapes with natural, independent finger motion, and appearance similar to that of a human hand. The hand also had a mass distribution more similar to a natural forearm and hand compared to contemporary prosthetics due to the more proximal location of the heavier components of the system. This paper describes the design of the hand and controller, as well as the test results.

  1. A Stochastic Grammar for Natural Shapes

    OpenAIRE

    Felzenszwalb, Pedro F.

    2013-01-01

    We consider object detection using a generic model for natural shapes. A common approach for object recognition involves matching object models directly to images. Another approach involves building intermediate representations via a generic grouping processes. We argue that these two processes (model-based recognition and grouping) may use similar computational mechanisms. By defining a generic model for shapes we can use model-based techniques to implement a mid-level vision grouping process.

  2. First applications of structural pattern recognition methods to the investigation of specific physical phenomena at JET

    International Nuclear Information System (INIS)

    Ratta, G.A.; Vega, J.; Pereira, A.; Portas, A.; Luna, E. de la; Dormido-Canto, S.; Farias, G.; Dormido, R.; Sanchez, J.; Duro, N.; Vargas, H.; Santos, M.; Pajares, G.; Murari, A.

    2008-01-01

    Structural pattern recognition techniques allow the identification of plasma behaviours. Physical properties are encoded in the morphological structure of signals. Intelligent access methods have been applied to JET databases to retrieve data according to physical criteria. On the one hand, the structural form of signals has been used to develop general purpose data retrieval systems to search for both similar entire waveforms and similar structural shapes inside waveforms. On the other hand, domain dependent knowledge was added to the structural information of signals to create particular data retrieval methods for specific physical phenomena. The inclusion of explicit knowledge assists in data analysis. The latter has been applied in JET to look for, first, cut-offs in ECE heterodyne radiometer signals and, second, L-H transitions.

  3. First applications of structural pattern recognition methods to the investigation of specific physical phenomena at JET

    Energy Technology Data Exchange (ETDEWEB)

    Ratta, G.A. [Asociacion EURATOM/CIEMAT para Fusion (Spain)], E-mail: giuseppe.ratta@ciemat.es; Vega, J.; Pereira, A.; Portas, A.; Luna, E. de la [Asociacion EURATOM/CIEMAT para Fusion (Spain); Dormido-Canto, S.; Farias, G.; Dormido, R.; Sanchez, J.; Duro, N.; Vargas, H. [Dpto. Informatica y Automatica-UNED, 28040 Madrid (Spain); Santos, M.; Pajares, G. [Dpto. Arquitectura de Computadores y Automatica-UCM, 28040 Madrid (Spain); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, Padua (Italy)

    2008-04-15

    Structural pattern recognition techniques allow the identification of plasma behaviours. Physical properties are encoded in the morphological structure of signals. Intelligent access methods have been applied to JET databases to retrieve data according to physical criteria. On the one hand, the structural form of signals has been used to develop general purpose data retrieval systems to search for both similar entire waveforms and similar structural shapes inside waveforms. On the other hand, domain dependent knowledge was added to the structural information of signals to create particular data retrieval methods for specific physical phenomena. The inclusion of explicit knowledge assists in data analysis. The latter has been applied in JET to look for first, cut-offs in ECE heterodyne radiometer signals and, second, L-H transitions.

  4. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    Science.gov (United States)

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  5. Electromyography data for non-invasive naturally-controlled robotic hand prostheses.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Castellini, Claudio; Caputo, Barbara; Hager, Anne-Gabrielle Mittaz; Elsig, Simone; Giatsidis, Giorgio; Bassetto, Franco; Müller, Henning

    2014-01-01

    Recent advances in rehabilitation robotics suggest that it may be possible for hand-amputated subjects to recover at least a significant part of the lost hand functionality. The control of robotic prosthetic hands using non-invasive techniques is still a challenge in real life: myoelectric prostheses give limited control capabilities, the control is often unnatural and must be learned through long training times. Meanwhile, scientific literature results are promising but they are still far from fulfilling real-life needs. This work aims to close this gap by allowing worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark scientific database. The database is targeted at studying the relationship between surface electromyography, hand kinematics and hand forces, with the final goal of developing non-invasive, naturally controlled, robotic hand prostheses. The validation section verifies that the data are similar to data acquired in real-life conditions, and that recognition of different hand tasks by applying state-of-the-art signal features and machine-learning algorithms is possible.
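
    As an illustration of the kind of analysis such a database supports (a sketch under assumed window sizes and thresholds, not the benchmark's reference code), the snippet below computes a common set of time-domain sEMG features per channel, which could then feed any of the machine-learning classifiers mentioned above.

    ```python
    # Minimal sketch: Hudgins-style time-domain features (mean absolute value,
    # waveform length, zero crossings, slope-sign changes) for one analysis
    # window of multi-channel sEMG. The signal here is synthetic noise.
    import numpy as np

    def td_features(window, thr=1e-3):
        mav = np.mean(np.abs(window))
        wl = np.sum(np.abs(np.diff(window)))
        zc = np.sum((window[:-1] * window[1:] < 0) &
                    (np.abs(window[:-1] - window[1:]) > thr))
        d = np.diff(window)
        ssc = np.sum((d[:-1] * d[1:] < 0) & (np.abs(d[:-1] - d[1:]) > thr))
        return np.array([mav, wl, zc, ssc])

    rng = np.random.default_rng(0)
    emg_window = rng.normal(0.0, 0.1, size=(4, 400))     # 4 channels, one window
    features = np.concatenate([td_features(ch) for ch in emg_window])
    print(features.shape)                                 # 4 channels x 4 features
    ```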

  6. Stimulation over primary motor cortex during action observation impairs effector recognition.

    Science.gov (United States)

    Naish, Katherine R; Barnes, Brittany; Obhi, Sukhvinder S

    2016-04-01

    Recent work suggests that motor cortical processing during action observation plays a role in later recognition of the object involved in the action. Here, we investigated whether recognition of the effector making an action is also impaired when transcranial magnetic stimulation (TMS) - thought to interfere with normal cortical activity - is applied over the primary motor cortex (M1) during action observation. In two experiments, single-pulse TMS was delivered over the hand area of M1 while participants watched short clips of hand actions. Participants were then asked whether an image (experiment 1) or a video (experiment 2) of a hand presented later in the trial was the same or different to the hand in the preceding video. In Experiment 1, we found that participants' ability to recognise static images of hands was significantly impaired when TMS was delivered over M1 during action observation, compared to when no TMS was delivered, or when stimulation was applied over the vertex. Conversely, stimulation over M1 did not affect recognition of dot configurations, or recognition of hands that were previously presented as static images (rather than action movie clips) with no object. In Experiment 2, we found that effector recognition was impaired when stimulation was applied part way through (300ms) and at the end (500ms) of the action observation period, indicating that 200ms of action-viewing following stimulation was not long enough to form a new representation that could be used for later recognition. The findings of both experiments suggest that interfering with cortical motor activity during action observation impairs subsequent recognition of the effector involved in the action, which complements previous findings of motor system involvement in object memory. This work provides some of the first evidence that motor processing during action observation is involved in forming representations of the effector that are useful beyond the action observation period

  7. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracies were obtained for the 110 words, respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  9. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracies were obtained for the 110 words, respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
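
    The code-matching idea can be sketched as follows (hypothetical sign words and component labels, not the paper's code table): each word is stored as a 5-tuple of component codes, the component classifiers output a predicted 5-tuple, and the word whose stored code agrees with the prediction on the most components is returned.

    ```python
    # Minimal sketch of component-code matching over a tiny, made-up code table.
    code_table = {
        "thanks": ("flat", "x", "palm_up", "none", "arc"),
        "good":   ("thumb_up", "y", "palm_in", "none", "line"),
        "friend": ("hook", "x", "palm_in", "twist", "line"),
    }

    def match_word(predicted):
        agreement = lambda word: sum(p == c for p, c in zip(predicted, code_table[word]))
        return max(code_table, key=agreement)

    # One component (rotation) misclassified; the closest stored code still wins.
    print(match_word(("flat", "x", "palm_up", "twist", "arc")))   # -> thanks
    ```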

  10. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach for emotion recognition from facial expression and hand and body posture. Our model uses a multimodal emotion recognition system with two different models, one for facial expression recognition and one for hand and body posture recognition, and then combines the results of both classifiers using a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a unimodal or bimodal system.
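
    The decision-level fusion described above can be sketched as follows (placeholder data and scikit-learn classifiers, not the authors' models): one classifier is trained on facial features, one on hand/body posture features, and a third classifier is trained on their class-probability outputs.

    ```python
    # Minimal sketch of late fusion: two modality classifiers plus a meta-classifier.
    # In practice the meta-classifier should be fit on held-out predictions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, n_emotions = 300, 6
    X_face, X_posture = rng.random((n, 20)), rng.random((n, 12))
    y = rng.integers(0, n_emotions, n)

    face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
    posture_clf = LogisticRegression(max_iter=1000).fit(X_posture, y)

    meta_X = np.hstack([face_clf.predict_proba(X_face),
                        posture_clf.predict_proba(X_posture)])
    fusion_clf = LogisticRegression(max_iter=1000).fit(meta_X, y)
    print(fusion_clf.predict(meta_X[:5]))
    ```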

  11. [Continuous observation of canal aberrations in S-shaped simulated root canal prepared by hand-used ProTaper files].

    Science.gov (United States)

    Xia, Ling-yun; Leng, Wei-dong; Mao, Min; Yang, Guo-biao; Xiang, Yong-gang; Chen, Xin-mei

    2009-08-01

    To observe the formation of canal aberrations in S-shaped root canals prepared by every file of hand-used ProTaper. Fifteen S-shaped simulated resin root canals were selected. Each root canal was prepared with every file of hand-used ProTaper following the manufacturer's instructions. Images of the canals prepared by S1, S2, F1, F2 and F3 were taken and stored, and divided into groups S1, S2, F1, F2 and F3. An image of the unprepared canal was superposed with the images of the same root canal in these five groups to observe the types and number of canal aberrations, which included unprepared area, danger zone, ledge, elbow, zip and perforation. The SPSS 12.0 software package was used for Fisher's exact probabilities in 2×2 tables. Unprepared area decreased following preparation by every file of ProTaper, but it still existed when the canal preparation was finished. The incidence of danger zone, elbow and zip in group F1 was 15/15, 11/15 and 4/15, respectively, which was significantly higher than that in group S2 (2/15, 0, 0) (P<0.05). Canal aberrations can be produced by every file of hand-used ProTaper. The presence of unprepared area suggests that it is essential to rinse the canal abundantly during complicated canal preparation and to apply canal antisepsis after preparation.

  12. Constraint Study for a Hand Exoskeleton: Human Hand Kinematics and Dynamics

    Directory of Open Access Journals (Sweden)

    Fai Chen Chen

    2013-01-01

    Full Text Available In the last few years, the number of projects studying the human hand from the robotic point of view has increased rapidly, due to the growing interest in academic and industrial applications. Nevertheless, the complexity of the human hand given its large number of degrees of freedom (DoF) within a significantly reduced space requires an exhaustive analysis before proposing any applications. The aim of this paper is to provide a complete summary of the kinematic and dynamic characteristics of the human hand as a preliminary step towards the development of hand devices such as prosthetic/robotic hands and exoskeletons imitating the human hand shape and functionality. A collection of data and constraints relevant to hand movements is presented, and the direct and inverse kinematics are solved for all the fingers as well as the dynamics; anthropometric data and dynamics equations make it possible to perform simulations to understand the behavior of the finger.
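
    As a toy companion to the kinematic analysis summarized above (illustrative link lengths, not the paper's anthropometric data), the sketch below computes the planar fingertip position of a single finger from its three phalange lengths and the MCP, PIP and DIP flexion angles.

    ```python
    # Minimal planar forward kinematics for one finger (3 links, 3 flexion joints).
    import numpy as np

    def fingertip_position(lengths, angles):
        """Fingertip (x, y) from cumulative joint flexion angles in radians."""
        x = y = cum = 0.0
        for L, q in zip(lengths, angles):
            cum += q
            x += L * np.cos(cum)
            y += L * np.sin(cum)
        return np.array([x, y])

    lengths = [0.045, 0.025, 0.018]            # proximal/middle/distal phalanges, metres (assumed)
    angles = np.deg2rad([30.0, 45.0, 20.0])    # MCP, PIP, DIP flexion
    print(fingertip_position(lengths, angles))
    ```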

  13. Macrophages recognize size and shape of their targets.

    Directory of Open Access Journals (Sweden)

    Nishit Doshi

    2010-04-01

    Full Text Available Recognition by macrophages is a key process in generating an immune response against invading pathogens. Previous studies have focused on recognition of pathogens through surface receptors present on the macrophage's surface. Here, using polymeric particles of different geometries that represent the size and shape range of a variety of bacteria, the importance of target geometry in recognition was investigated. The studies reported here reveal that attachment of particles of different geometries to macrophages exhibits a strong dependence on size and shape. For all sizes and shapes studied, particles possessing the longest dimension in the range of 2–3 μm exhibited the highest attachment. This also happens to be the size range of most commonly found bacteria in nature. The surface features of macrophages, in particular the membrane ruffles, might play an important role in this geometry-based target recognition by macrophages. These findings have significant implications in understanding the pathogenicity of bacteria and in designing drug delivery carriers.

  14. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  15. Prolonged disengagement from distractors near the hands

    Directory of Open Access Journals (Sweden)

    Daniel B Vatterott

    2013-08-01

    Full Text Available Because items near our hands are often more important than items far from our hands, the brain processes visual items near our hands differently than items far from our hands. Multiple experiments have attributed this processing difference to spatial attention, but the exact mechanism behind how spatial attention near our hands changes is still under investigation. The current experiments sought to differentiate between two of the proposed mechanisms: a prioritization of the space near the hands and a prolonged disengagement of spatial attention near the hands. To differentiate between these two accounts, we used the additional singleton paradigm in which observers searched for a shape singleton among homogenously shaped distractors. On half the trials, one of the distractors was a different color. Both the prioritization and disengagement accounts predict differently colored distractors near the hands will slow target responses more than differently colored distractors far from the hands, but the prioritization account also predicts faster responses to targets near the hands than far from the hands. The disengagement account does not make this prediction, because attention does not need to be disengaged when the target appears near the hand. We found support for the disengagement account: Salient distractors near the hands slowed responses more than those far from the hands, yet observers did not respond faster to targets near the hands.

  16. Branch length similarity entropy-based descriptors for shape representation

    Science.gov (United States)

    Kwon, Ohsung; Lee, Sang-Hee

    2017-11-01

    In previous studies, we showed that the branch length similarity (BLS) entropy profile could be successfully used for the recognition of shapes such as battle tanks, facial expressions, and butterflies. In the present study, we propose new descriptors for the recognition, roundness, symmetry, and surface roughness, which are more accurate and faster to compute than the previous descriptors. The roundness represents how closely a shape resembles a circle, the symmetry characterizes how similar a shape is to its flipped copy, and the surface roughness quantifies the degree of vertical deviation of the shape boundary. To evaluate the performance of the descriptors, we used a database of leaf images from 12 species. Each species consisted of 10-20 leaf images and the total number of images was 160. The evaluation showed that the new descriptors successfully discriminated the leaf species. We believe that the descriptors can be a useful tool in the field of pattern recognition.
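
    One of the named descriptors can be illustrated directly (a sketch using the common isoperimetric definition of roundness, 4πA/P², which may differ in detail from the paper's formulation):

    ```python
    # Minimal sketch: roundness of a binary silhouette, equal to 1 for an ideal circle.
    import numpy as np
    from skimage.draw import disk
    from skimage.measure import label, regionprops

    mask = np.zeros((200, 200), dtype=np.uint8)
    rr, cc = disk((100, 100), 60)
    mask[rr, cc] = 1                                    # toy "leaf": a disc

    props = regionprops(label(mask))[0]
    roundness = 4 * np.pi * props.area / props.perimeter ** 2
    print(round(float(roundness), 3))                   # close to 1 for a disc
    ```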

  17. SURVEY OF BIOMETRIC SYSTEMS USING IRIS RECOGNITION

    OpenAIRE

    S.PON SANGEETHA; DR.M.KARNAN

    2014-01-01

    Security plays an important role in any type of organization in today's life. Iris recognition is one of the leading automatic biometric systems in the area of security, used to identify the individual person. Biometric modalities include fingerprints, facial features, voice recognition, hand geometry, handwriting, the eye retina and the most secure one presented in this paper, iris recognition. Biometric systems have become very popular in security systems because it is not possi...

  18. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan hand gestures.

  19. Pattern recognition applied to infrared images for early alerts in fog

    Science.gov (United States)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog conditions are the cause of severe car accidents in western countries because of the poor visibility they induce. The occurrence and intensity of fog are still very difficult for weather services to predict. Infrared cameras make it possible to detect and identify objects in fog when visibility is too low for the eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. On the other hand, pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during its natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangular for an alert, …). It has been shown that road signs were detected early enough in infrared images, compared with images in the visible spectrum, to trigger useful alerts for Advanced Driver Assistance Systems.
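
    The Canny-plus-Hough stage can be sketched as follows with OpenCV (a synthetic frame and illustrative parameter values, not the study's code): circular candidates such as speed-limit signs are searched for in a single-channel infrared image.

    ```python
    # Minimal sketch: Canny edges plus a Hough circle search on a toy IR frame.
    import cv2
    import numpy as np

    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(frame, (160, 120), 30, 200, -1)          # bright disc standing in for a sign
    frame = cv2.GaussianBlur(frame, (5, 5), 1.5)

    edges = cv2.Canny(frame, 50, 150)                   # edge map (would feed a line/triangle search)
    circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=150, param2=30, minRadius=10, maxRadius=60)
    n_found = 0 if circles is None else len(circles[0])
    print(n_found, "circular sign candidate(s)")
    ```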

  20. Artificial Neural Network Based Optical Character Recognition

    OpenAIRE

    Vivek Shrivastava; Navdeep Sharma

    2012-01-01

    Optical Character Recognition deals with the recognition and classification of characters from an image. For the recognition to be accurate, certain topological and geometrical properties are calculated, based on which a character is classified and recognized. Human perception also identifies characters by their overall shape and by features such as strokes, curves, protrusions, enclosures, etc. These properties, also called features, are extracted from the image by means of spatial pixel-...

  1. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    Science.gov (United States)

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual

  2. Object recognition in images by human vision and computer vision

    NARCIS (Netherlands)

    Chen, Q.; Dijkstra, J.; Vries, de B.

    2010-01-01

    Object recognition plays a major role in human behaviour research in the built environment. Computer based object recognition techniques using images as input are challenging, but not an adequate representation of human vision. This paper reports on the differences in object shape recognition

  3. Bare-Hand Volume Cracker for Raw Volume Data Analysis

    Directory of Open Access Journals (Sweden)

    Bireswar Laha

    2016-09-01

    Full Text Available Analysis of raw volume data generated from different scanning technologies faces a variety of challenges, related to search, pattern recognition, spatial understanding, quantitative estimation, and shape description. In a previous study, we found that the Volume Cracker (VC) 3D interaction (3DI) technique mitigated some of these problems, but this result was from a tethered glove-based system with users analyzing simulated data. Here, we redesigned the VC by using untethered bare-hand interaction with real volume datasets, with a broader aim of adoption of this technique in research labs. We developed symmetric and asymmetric interfaces for the Bare-Hand Volume Cracker (BHVC) through design iterations with a biomechanics scientist. We evaluated our asymmetric BHVC technique against standard 2D and widely used 3D interaction techniques with experts analyzing scanned beetle datasets. We found that our BHVC design significantly outperformed the other two techniques. This study contributes a practical 3DI design for scientists, documents lessons learned while redesigning for bare-hand trackers, and provides evidence suggesting that 3D interaction could improve volume data analysis for a variety of visual analysis tasks. Our contribution is in the realm of 3D user interfaces tightly integrated with visualization, for improving the effectiveness of visual analysis of volume datasets. Based on our experience, we also provide some insights into hardware-agnostic principles for design of effective interaction techniques.

  4. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of the module of gesture recognition and the module of robot behaviour control. For the module of gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain the neutral hand pose before the training and testing of the CNN. For the module of robot behaviour control, robot motion primitives corresponding to TE, TL, G, and RL, respectively, are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because the XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the module of gesture recognition can be easily used in the XABSL programming platform to implement the robot pick-and-place tasks. The experimental evaluation of seven subjects performing seven hand gestures showed that the average recognition rate was 95.96%. Moreover, by the XABSL programming platform, the experiment showed the cube-stacking task was easily programmed by human demonstration.
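
    The GMM skin filter mentioned above can be sketched as follows (synthetic colour samples and an assumed likelihood threshold, not the authors' trained model): a Gaussian mixture is fitted to known skin pixels in a chosen colour space and image pixels are kept only if their likelihood under the mixture is high enough.

    ```python
    # Minimal sketch of GMM-based skin filtering on placeholder (Cr, Cb) values.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    skin_samples = rng.normal([150.0, 120.0], [15.0, 10.0], size=(500, 2))

    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=0).fit(skin_samples)

    image_pixels = rng.uniform(0, 255, size=(1000, 2))   # flattened test image
    log_lik = gmm.score_samples(image_pixels)
    skin_mask = log_lik > -12.0                           # threshold is an assumption
    print(int(skin_mask.sum()), "of", len(image_pixels), "pixels kept as skin")
    ```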

  5. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Evaluating persistence of shape information using a matching protocol

    Directory of Open Access Journals (Sweden)

    Ernest Greene

    2018-02-01

    Full Text Available Many laboratories have studied persistence of shape information, the goal being to better understand how the visual system mediates recognition of objects. Most have asked for recognition of known shapes, e.g., letters of the alphabet, or recall from an array. Recognition of known shapes requires access to long-term memory, so it is not possible to know whether the experiment is assessing short-term encoding and working memory mechanisms, or has encountered limitations on retrieval from memory stores. Here we have used an inventory of unknown shapes, wherein a string of discrete dots forms the boundary of each shape. Each was displayed as a target only once to a given respondent, with recognition being tested using a matching task. Analysis based on signal detection theory was used to provide an unbiased estimate of the probability of correct decisions about whether comparison shapes matched target shapes. Four experiments were conducted, which found the following: (a) Shapes were identified with a high probability of being correct with dot densities ranging from 20% to 4%. Performance dropped only about 10% across this density range. (b) Shape identification levels remained very high with up to 500 milliseconds of target and comparison shape separation. (c) With one-at-a-time display of target dots, varying the total time for a given display, the proportion of correct decisions dropped only about 10% even with a total display time of 500 milliseconds. (d) With display of two complementary target subsets, also varying the total time of each display, there was a dramatic decline of proportion correct that reached chance levels by 500 milliseconds. The greater rate of decline for the two-pulse condition may be due to a mechanism that registers when the number of dots is sufficient to create a shape summary. Once a summary is produced, the temporal window that allows shape information to be added may be more limited.
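
    The signal-detection computation implied above can be sketched as follows (made-up trial counts; the paper's exact analysis may differ): hit and false-alarm rates from the match/non-match decisions give d′, and Φ(d′/2) is the corresponding bias-free proportion correct.

    ```python
    # Minimal sketch: d-prime and unbiased proportion correct from a matching task.
    from scipy.stats import norm

    hits, misses = 42, 8        # "match" responses on matching trials
    fas, crs = 12, 38           # "match" responses on non-matching trials

    H = hits / (hits + misses)
    FA = fas / (fas + crs)
    d_prime = norm.ppf(H) - norm.ppf(FA)
    p_correct_unbiased = norm.cdf(d_prime / 2)
    print(round(d_prime, 2), round(p_correct_unbiased, 3))
    ```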

  7. Hand pose recognition in First Person Vision through graph spectral analysis

    NARCIS (Netherlands)

    Baydoun, Mohamad; Betancourt, Alejandro; Morerio, Pietro; Marcenaro, Lucio; Rauterberg, Matthias; Regazzoni, Carlo

    2017-01-01

    © 2017 IEEE. With the growing availability of wearable technology, video recording devices have become so intimately tied to individuals that they are able to record the movements of users' hands, making hand-based applications one of the most explored areas in First Person Vision (FPV). In particular,

  8. Handwriting Moroccan regions recognition using Tifinagh character

    Directory of Open Access Journals (Sweden)

    B. El Kessab

    2015-09-01

    In this context we propose a dataset of handwritten Tifinagh region names composed of 1600 images (100 images for each region). The dataset can be used, on the one hand, to test the efficiency of a Tifinagh region recognition system in extracting significant characteristics and, on the other hand, to verify the correct identification of each region in the classification phase.

  9. Online handwritten mathematical expression recognition

    Science.gov (United States)

    Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül

    2007-01-01

    We describe a system for recognizing online handwritten mathematical expressions. The system is designed with a user interface for writing scientific articles, supporting the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols, which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining neural network output and the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text and mathematics and draw figures in a single interface. After recognition, all output is combined into one LaTeX file and compiled into a PDF file.

  10. Mexican sign language recognition using normalized moments and artificial neural networks

    Science.gov (United States)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A dataset was recorded with 24 static signs from the MSL using 5 different versions of each sign; this MSL dataset was captured using a digital camera in incoherent light conditions. Digital image processing was used to segment the hand gestures, and a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scale signs; an artificial neural network then performed the recognition using 10-fold cross-validation in Weka, and the best result achieved a recognition rate of 95.83%.
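
    The moment-based pipeline can be sketched as follows (placeholder binary masks and an assumed network size, not the authors' dataset or Weka setup): translation- and scale-normalized central moments are computed for each segmented sign and classified with a small neural network.

    ```python
    # Minimal sketch: normalized geometric moments + MLP for static sign classification.
    import numpy as np
    from skimage.measure import moments_central, moments_normalized
    from sklearn.neural_network import MLPClassifier

    def moment_features(mask, order=3):
        mu = moments_central(mask.astype(float), order=order)
        nu = moments_normalized(mu, order=order)
        return nu[np.isfinite(nu)]            # drop the undefined low-order entries

    rng = np.random.default_rng(0)
    masks = (rng.random((120, 32, 32)) > 0.5).astype(float)   # stand-ins for hand masks
    y = rng.integers(0, 24, 120)                               # 24 static signs

    X = np.array([moment_features(m) for m in masks])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
    print(X.shape, round(clf.score(X, y), 2))
    ```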

  11. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distr...

  12. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  13. A Printed Xi-Shaped Left-Handed Metamaterial on Low-Cost Flexible Photo Paper.

    Science.gov (United States)

    Ashraf, Farhad Bin; Alam, Touhidul; Islam, Mohammad Tariqul

    2017-07-05

    A Xi-shaped meta-structure is introduced in this paper. A modified split-ring resonator (MSRR) and a capacitively loaded strip (CLS) were used to achieve the left-handed property of the metamaterial. The structure was printed using silver metallic nanoparticle ink on very low-cost photo paper as the substrate material. The resonators were inkjet-printed with silver nanoparticle ink on paper to make the metamaterial flexible. It is also free from any kind of chemical waste, which makes it eco-friendly. A double-negative region from 8.72 GHz to 10.91 GHz (a bandwidth of 2.19 GHz) in the X-band microwave spectrum was found. The figure of merit was also obtained to measure the loss in the double-negative region. The simulated result was verified by the performance of the fabricated prototype. The total dimensions of the proposed structure are 0.29 λ × 0.29 λ × 0.007 λ. It is a promising unit cell because of its simplicity, cost-effectiveness, and easy fabrication process.

  14. The Prototype of Indicators of a Responsive Partner Shapes Information Processing: A False Recognition Study.

    Science.gov (United States)

    Turan, Bulent

    2016-01-01

    When judging whether a relationship partner can be counted on to "be there" when needed, people may draw upon knowledge structures to process relevant information. We examined one such knowledge structure using the prototype methodology: indicators of a partner who is likely to be there when needed. In the first study (N = 91), the structure, content, and reliability of the prototype of indicators were examined. Then, using a false recognition study (N = 77), we demonstrated that once activated, the prototype of indicators of a partner who is likely to be there when needed affects information processing. Thus, the prototype of indicators may shape how people process support-relevant information in everyday life, affecting relationship outcomes. Using this knowledge structure may help a person process relevant information quickly and with cognitive economy. However, it may also lead to biases in judgments in certain situations.

  15. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    Science.gov (United States)

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.

  16. Hand Grasping Synergies As Biometrics.

    Science.gov (United States)

    Patel, Vrajeshri; Thukral, Poojita; Burns, Martin K; Florescu, Ionut; Chandramouli, Rajarathnam; Vinjamuri, Ramana

    2017-01-01

    Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. This includes iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement that integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies-postural synergies. In this proof of concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.

  17. Hand Grasping Synergies As Biometrics

    Directory of Open Access Journals (Sweden)

    Ramana Vinjamuri

    2017-05-01

    Full Text Available Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. This includes iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement that integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies—postural synergies. In this proof of concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.

  18. A mechatronics platform to study prosthetic hand control using EMG signals.

    Science.gov (United States)

    Geethanjali, P

    2016-09-01

    In this paper, a low-cost mechatronics platform for the design and development of robotic hands as well as a surface electromyogram (EMG) pattern recognition system is proposed. This paper also explores various EMG classification techniques using a low-cost electronics system in prosthetic hand applications. The proposed platform involves the development of a four channel EMG signal acquisition system; pattern recognition of acquired EMG signals; and development of a digital controller for a robotic hand. Four-channel surface EMG signals, acquired from ten healthy subjects for six different movements of the hand, were used to analyse pattern recognition in prosthetic hand control. Various time domain features were extracted and grouped into five ensembles to compare the influence of features in feature-selective classifiers (SLR) with widely considered non-feature-selective classifiers, such as neural networks (NN), linear discriminant analysis (LDA) and support vector machines (SVM) applied with different kernels. The results divulged that the average classification accuracy of the SVM, with a linear kernel function, outperforms other classifiers with feature ensembles, Hudgin's feature set and auto regression (AR) coefficients. However, the slight improvement in classification accuracy of SVM incurs more processing time and memory space in the low-level controller. The Kruskal-Wallis (KW) test also shows that there is no significant difference in the classification performance of SLR with Hudgin's feature set to that of SVM with Hudgin's features along with AR coefficients. In addition, the KW test shows that SLR was found to be better in respect to computation time and memory space, which is vital in a low-level controller. Similar to SVM, with a linear kernel function, other non-feature selective LDA and NN classifiers also show a slight improvement in performance using twice the features but with the drawback of increased memory space requirement and time
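
    As a rough sketch of the kind of pipeline evaluated above, the snippet below computes the classical Hudgin's time-domain features (mean absolute value, waveform length, zero crossings, slope sign changes) per channel and trains an LDA classifier, one of the non-feature-selective baselines mentioned; the window layout, threshold and labels are placeholder assumptions, not the paper's configuration.

```python
# Hedged sketch of Hudgin's time-domain EMG features plus an LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hudgins_features(window, threshold=0.01):
    """window: 1-D EMG samples for one channel."""
    diff = np.diff(window)
    mav = np.mean(np.abs(window))                      # mean absolute value
    wl = np.sum(np.abs(diff))                          # waveform length
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(diff) > threshold))            # zero crossings
    ssc = np.sum((diff[:-1] * diff[1:] < 0) &
                 ((np.abs(diff[:-1]) > threshold) |
                  (np.abs(diff[1:]) > threshold)))     # slope sign changes
    return np.array([mav, wl, zc, ssc])

rng = np.random.default_rng(1)
windows = rng.normal(size=(300, 4, 200))   # 300 windows x 4 channels x 200 samples
labels = rng.integers(0, 6, size=300)      # six hand movements (placeholder labels)

X = np.array([np.concatenate([hudgins_features(w[ch]) for ch in range(4)])
              for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```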

  19. Global precedence effects account for individual differences in both face and object recognition performance

    DEFF Research Database (Denmark)

    Gerlach, Christian; Starrfelt, Randi

    2018-01-01

    examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate...... both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition...

  20. Universal brain systems for recognizing word shapes and handwriting gestures during reading.

    Science.gov (United States)

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J L; Dehaene, Stanislas

    2012-12-11

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner's area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies.

  1. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    Science.gov (United States)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer a separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform based detection from only straight lines to areas bounded by edges. A very compact optical design is achieved by a microlens array processor accepting incoherent light as direct optical input and realizing the computationally expensive connections massively parallel. Our newly developed algorithm extracts rotation and translation invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initialization of the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industry inspection applications. Presently we have demonstrated detection of six different machined parts in real-time. Our method yields very promising detection results of more than 96% correctly classified parts.
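
    The processor above is optical, but the underlying accumulator idea can be sketched in software; the following minimal straight-line Hough transform over a synthetic edge image only illustrates that idea and does not reproduce the microlens-array processor or its area-based extension.

```python
# Illustrative digital Hough transform for straight lines over a synthetic edge image.
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    h, w = edges.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for theta_idx, theta in enumerate(thetas):
        rho_vals = xs * np.cos(theta) + ys * np.sin(theta)
        rho_idx = np.digitize(rho_vals, rhos) - 1      # vote into the nearest rho bin
        np.add.at(acc, (rho_idx, theta_idx), 1)
    return acc, thetas, rhos

edges = np.zeros((64, 64), dtype=bool)
edges[np.arange(64), np.arange(64)] = True             # a synthetic diagonal edge
acc, thetas, rhos = hough_lines(edges)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
print("strongest line: theta=%.2f rad, rho=%.1f" % (thetas[theta_i], rhos[rho_i]))
```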

  2. Sensing human hand motions for controlling dexterous robots

    Science.gov (United States)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device that makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  3. Hand based visual intent recognition algorithm for wheelchair motion

    CSIR Research Space (South Africa)

    Luhandjula, T

    2010-05-01

    Full Text Available This paper describes an algorithm for a visual human-machine interface that infers a person’s intention from the motion of the hand. Work in progress shows a proof of concept tested on static images. The context for which this solution is intended...

  4. Temporal hemodynamic classification of two hands tapping using functional near-infrared spectroscopy.

    Science.gov (United States)

    Thanh Hai, Nguyen; Cuong, Ngo Q; Dang Khoa, Truong Q; Van Toi, Vo

    2013-01-01

    In recent decades, a lot of achievements have been obtained in imaging and cognitive neuroscience of the human brain. The brain's activities can be shown by a number of different kinds of non-invasive technologies, such as: Near-Infrared Spectroscopy (NIRS), Magnetic Resonance Imaging (MRI), and ElectroEncephaloGraphy (EEG; Wolpaw et al., 2002; Weiskopf et al., 2004; Blankertz et al., 2006). NIRS has become a convenient technology for experimental brain purposes. The change of oxygenation (oxy-Hb) along the task period, depending on the location of the channel on the cortex, has been studied: sustained activation in the motor cortex, transient activation during the initial segments in the somatosensory cortex, and accumulating activation in the frontal lobe (Gentili et al., 2010). Oxy-Hb concentration at the aforementioned sites in the brain can also be used as a predictive factor that allows prediction of a subject's investigation behavior with a considerable degree of precision (Shimokawa et al., 2009). In this paper, a recognition algorithm will be described for recognizing whether one taps the left hand (LH) or the right hand (RH). Data with noise and artifacts collected from a multi-channel system will be pre-processed using a Savitzky-Golay filter to obtain smoother data. Characteristics of the filtered signals during the LH and RH tapping process will be extracted using a polynomial regression (PR) algorithm. Coefficients of the polynomial, which correspond to Oxygen-Hemoglobin (Oxy-Hb) concentration, will be applied to the recognition models of hand tapping. Support Vector Machines (SVM) will be applied to validate the obtained coefficient data for hand tapping recognition. In addition, for the objective of comparison, Artificial Neural Networks (ANNs) were also applied to recognize the hand tapping side with the same principle. Experiments were conducted over many trials on three subjects to illustrate the effectiveness of the proposed method.
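
    A minimal sketch of the described chain, assuming synthetic oxy-Hb traces and placeholder parameters: Savitzky-Golay smoothing, per-trial polynomial regression, and an SVM trained on the polynomial coefficients.

```python
# Hedged sketch of the smoothing + polynomial-regression + SVM pipeline on synthetic data.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_samples = 60, 120
trials = rng.normal(size=(n_trials, n_samples))    # placeholder oxy-Hb traces
labels = rng.integers(0, 2, size=n_trials)         # 0 = left-hand tap, 1 = right-hand tap

t = np.linspace(0.0, 1.0, n_samples)
features = []
for trial in trials:
    smoothed = savgol_filter(trial, window_length=11, polyorder=3)  # denoise the trace
    coeffs = np.polyfit(t, smoothed, deg=4)                         # polynomial regression
    features.append(coeffs)
X = np.array(features)

clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```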

  5. Temporal hemodynamic classification of two hands tapping using functional near—infrared spectroscopy

    Science.gov (United States)

    Thanh Hai, Nguyen; Cuong, Ngo Q.; Dang Khoa, Truong Q.; Van Toi, Vo

    2013-01-01

    In recent decades, a lot of achievements have been obtained in imaging and cognitive neuroscience of the human brain. The brain's activities can be shown by a number of different kinds of non-invasive technologies, such as: Near-Infrared Spectroscopy (NIRS), Magnetic Resonance Imaging (MRI), and ElectroEncephaloGraphy (EEG; Wolpaw et al., 2002; Weiskopf et al., 2004; Blankertz et al., 2006). NIRS has become a convenient technology for experimental brain purposes. The change of oxygenation (oxy-Hb) along the task period, depending on the location of the channel on the cortex, has been studied: sustained activation in the motor cortex, transient activation during the initial segments in the somatosensory cortex, and accumulating activation in the frontal lobe (Gentili et al., 2010). Oxy-Hb concentration at the aforementioned sites in the brain can also be used as a predictive factor that allows prediction of a subject's investigation behavior with a considerable degree of precision (Shimokawa et al., 2009). In this paper, a recognition algorithm will be described for recognizing whether one taps the left hand (LH) or the right hand (RH). Data with noise and artifacts collected from a multi-channel system will be pre-processed using a Savitzky–Golay filter to obtain smoother data. Characteristics of the filtered signals during the LH and RH tapping process will be extracted using a polynomial regression (PR) algorithm. Coefficients of the polynomial, which correspond to Oxygen-Hemoglobin (Oxy-Hb) concentration, will be applied to the recognition models of hand tapping. Support Vector Machines (SVM) will be applied to validate the obtained coefficient data for hand tapping recognition. In addition, for the objective of comparison, Artificial Neural Networks (ANNs) were also applied to recognize the hand tapping side with the same principle. Experiments were conducted over many trials on three subjects to illustrate the effectiveness of the proposed method. PMID:24032008

  6. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  7. An Efficient Framework for Road Sign Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Duanling Li

    2014-02-01

    Full Text Available Road sign detection and recognition is a significant and challenging issue not only for assisting drivers but also for navigating mobile robots. In this paper, we propose a novel and fast approach for the automatic detection and recognition of road signs. First, we use the Hue Saturation Intensity (HSI) color space to segment the road sign colors. We then locate the road signs based on geometric symmetry, as almost all road sign shapes are symmetrical, such as circles, rectangles, triangles and octagons. The proposed shape feature is further applied for an initial shape classification. Finally, the road signs are recognized exactly by support vector machine (SVM) classifiers. We test our proposed method on real road images and the experimental results show that it can detect and recognize road signs rapidly and accurately.
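
    A rough sketch of the colour-segmentation stage, using OpenCV's HSV space as a stand-in for the HSI space described above; the red-hue thresholds and the synthetic test image are illustrative assumptions, not the paper's values, and OpenCV 4 is assumed for the findContours signature.

```python
# Illustrative colour segmentation of a red sign-like region in HSV space.
import numpy as np
import cv2

img = np.zeros((100, 100, 3), dtype=np.uint8)
cv2.circle(img, (50, 50), 20, (0, 0, 255), -1)          # a red disc (BGR), mimicking a sign

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Red wraps around hue 0, so two hue ranges are combined.
lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
mask = cv2.bitwise_or(lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("candidate sign regions:", len(contours))
```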

  8. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  9. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: The Chinese potato staple food strategy clearly points out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced raw and processed potato detection methods. Method: Based on a review of the research literature on image-recognition-based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., the development and direction of this field are summarized. Result: In order to obtain whole-potato surface information, hardware was built by synchronizing an image sensor with a conveyor belt to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracy of more than 83%. Weight is an important indicator for potato grading, and the image classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with damage shape and damage time as the main affecting factors. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black-heart image recognition must be operated in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices, fries, etc. Conclusion: Image recognition as a rapid food detection tool has been widely researched in the area of raw and processed potato quality analysis; its techniques and equipment have the potential for commercialization in the short term, to meet the strategic demand of developing potato as

  10. A system of automatic speaker recognition on a minicomputer

    International Nuclear Information System (INIS)

    El Chafei, Cherif

    1978-01-01

    This study describes a system for automatic speaker recognition using the pitch of the voice. The pre-processing consists in extracting the speakers' discriminating characteristics from the pitch. The recognition programme first performs a preselection and then calculates the distance between the characteristics of the speaker to be recognized and those of the speakers already recorded. A recognition experiment was carried out with 15 speakers and included 566 tests spread over an intermittent period of four months. The discriminating characteristics used offer several interesting qualities. The algorithms for measuring the characteristics on the one hand, and for classifying the speakers on the other hand, are simple. The results obtained in real time with a minicomputer are satisfactory. Furthermore, they could probably be improved by considering other discriminating characteristics of the speakers, but this was unfortunately not within our possibilities. (author) [fr

  11. Robust 3D Face Recognition in the Presence of Realistic Occlusions

    NARCIS (Netherlands)

    Alyuz, Nese; Gökberk, B.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Akarun, Lale

    2012-01-01

    Facial occlusions pose significant problems for automatic face recognition systems. In this work, we propose a novel occlusion-resistant three-dimensional (3D) facial identification system. We show that, under extreme occlusions due to hair, hands, and eyeglasses, typical 3D face recognition systems

  12. Unconstrained and contactless hand geometry biometrics.

    Science.gov (United States)

    de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier

    2011-01-01

    This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature, namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution to devices with limited hardware requirements like mobile devices.

  13. Unconstrained and Contactless Hand Geometry Biometrics

    Directory of Open Access Journals (Sweden)

    Carmen Sánchez-Ávila

    2011-10-01

    Full Text Available This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature, namely Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution to devices with limited hardware requirements like mobile devices.

  14. Evaluating EMG Feature and Classifier Selection for Application to Partial-Hand Prosthesis Control

    Directory of Open Access Journals (Sweden)

    Adenike A. Adewuyi

    2016-10-01

    Full Text Available Pattern recognition-based myoelectric control of upper limb prostheses has the potential to restore control of multiple degrees of freedom. Though this control method has been extensively studied in individuals with higher-level amputations, few studies have investigated its effectiveness for individuals with partial-hand amputations. Most partial-hand amputees retain a functional wrist and the ability of pattern recognition-based methods to correctly classify hand motions from different wrist positions is not well studied. In this study, focusing on partial-hand amputees, we evaluate (1) the performance of non-linear and linear pattern recognition algorithms and (2) the performance of optimal EMG feature subsets for classification of four hand motion classes in different wrist positions for 16 non-amputees and 4 amputees. Our results show that linear discriminant analysis and linear and non-linear artificial neural networks perform significantly better than the quadratic discriminant analysis for both non-amputees and partial-hand amputees. For amputees, including information from multiple wrist positions significantly decreased error (p<0.001) but no further significant decrease in error occurred when more than 4, 2, or 3 positions were included for the extrinsic (p=0.07), intrinsic (p=0.06), or combined extrinsic and intrinsic muscle EMG (p=0.08), respectively. Finally, we found that a feature set determined by selecting optimal features from each channel outperformed the commonly used time domain (p<0.001) and time domain/autoregressive feature sets (p<0.01). This method can be used as a screening filter to select the features from each channel that provide the best classification of hand postures across different wrist positions.

  15. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    Science.gov (United States)

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as it intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  16. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    Directory of Open Access Journals (Sweden)

    Xin Li

    2014-06-01

    Full Text Available Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.
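
    The toy sketch below shows sparse-coding-based recognition in the spirit described above: a query is coded sparsely over a dictionary of labelled templates and assigned to the class with the smallest reconstruction residual. The synthetic data, dictionary layout and OMP solver are illustrative assumptions, not the paper's tracking-and-recognition framework.

```python
# Toy sparse-representation classification: code the query over labeled atoms,
# then pick the class whose atoms reconstruct it best.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n_per_class, n_classes, dim = 20, 3, 64
dictionary, dict_labels = [], []
class_means = rng.normal(scale=3.0, size=(n_classes, dim))
for c in range(n_classes):
    dictionary.append(class_means[c] + rng.normal(size=(n_per_class, dim)))
    dict_labels += [c] * n_per_class
D = np.vstack(dictionary).T                       # columns are training templates
dict_labels = np.array(dict_labels)

test = class_means[1] + rng.normal(size=dim)      # a query drawn from class 1

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, test)
code = omp.coef_
residuals = []
for c in range(n_classes):
    mask = dict_labels == c
    recon = D[:, mask] @ code[mask]               # reconstruction using class-c atoms only
    residuals.append(np.linalg.norm(test - recon))
print("predicted class:", int(np.argmin(residuals)))
```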

  17. Modeling Self-Occlusions/Disocclusions in Dynamic Shape and Appearance Tracking for Obtaining Precise Shape

    KAUST Repository

    Yang, Yanchao

    2013-01-01

    We present a method to determine the precise shape of a dynamic object from video. This problem is fundamental to computer vision, and has a number of applications, for example, 3D video/cinema post-production, activity recognition and augmented

  18. Possibility of object recognition using Altera's model based design approach

    International Nuclear Information System (INIS)

    Tickle, A J; Harvey, P K; Smith, J S; Wu, F

    2009-01-01

    Object recognition is the image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas: one of these is decision-theoretic and deals with patterns described by quantitative descriptors, such as length, area, shape and texture. With the Graphical User Interface Circuitry (GUIC) methodology employed here being relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image supplies the preset data for identification; tests are conducted for both binary and greyscale images, and the additional mathematical morphology used to highlight the area within the target image where the object(s) are located is also presented. This then provides proof that basic recognition methods are valid and would allow progression to developing decision-theoretic and learning-based approaches using GUICs for use in multidisciplinary tasks.

  19. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects face from the stored video frame using skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experiment results indicate that using support vector machine as classifier can certainly improve the performance of automatic pain recognition system.

  20. FORMS OF HAND IN SIGN LANGUAGE IN BOSNIA AND HERZEGOVINA

    Directory of Open Access Journals (Sweden)

    Husnija Hasanbegović

    2013-05-01

    Full Text Available A sign in sign language, equivalent to a word, phrase or sentence in an oral language, can be divided into linguistic units of lower levels: shape of the hand, place of articulation, type of movement and orientation of the palm. The first description of these units that is still present and applicable in Bosnia and Herzegovina (B&H) was given by Zimmerman in 1986, who found 27 hand shapes, while the other unit types were not systematically developed or described. The aim of this study was to determine the possible existence of other hand forms present in the sign language used in B&H. Content analysis of the 425 analyzed signs in the sign language of B&H confirmed their existence, and we also discovered and presented 14 new hand shapes. In this way, we confirmed the need for detailed research, standardization and publishing of the sign language of B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.

  1. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones.

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-08-20

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user's daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.

  2. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    Directory of Open Access Journals (Sweden)

    Seongwan Kim

    2017-01-01

    Full Text Available The research on hand gestures has attracted many image processing-related studies, as it intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor’s stability against luminance and the visual sensor’s textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  3. Target recognition of log-polar ladar range images using moment invariants

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on the recognition result, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation and scaling invariant properties of combined moments will be invalid; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object position changes from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we give a suggestion that it is better to divide the field of view into a recognition area and a searching area in real applications.
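
    As an illustration of moment-invariant shape features of the kind combined with a neural-network classifier above, the sketch below computes Hu moments of a synthetic binary silhouette with OpenCV; the ladar imaging, log-polar sampling and the paper's specific combined moments are not reproduced.

```python
# Hu moment invariants of a synthetic binary silhouette (illustrative only).
import numpy as np
import cv2

shape = np.zeros((128, 128), dtype=np.uint8)
cv2.ellipse(shape, (64, 64), (40, 20), 30, 0, 360, 255, -1)   # synthetic target silhouette

moments = cv2.moments(shape, binaryImage=True)
hu = cv2.HuMoments(moments).flatten()
# Log-scaling is the usual trick to compress the large dynamic range of Hu moments.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)
```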

  4. Advanced Myoelectric Control for Robotic Hand-Assisted Training: Outcome from a Stroke Patient.

    Science.gov (United States)

    Lu, Zhiyuan; Tong, Kai-Yu; Shin, Henry; Li, Sheng; Zhou, Ping

    2017-01-01

    A hand exoskeleton driven by myoelectric pattern recognition was designed for stroke rehabilitation. It detects and recognizes the user's motion intent based on electromyography (EMG) signals, and then helps the user to accomplish hand motions in real time. The hand exoskeleton can perform six kinds of motions, including the whole hand closing/opening, tripod pinch/opening, and the "gun" sign/opening. A 52-year-old woman, 8 months after stroke, made 20× 2-h visits over 10 weeks to participate in robot-assisted hand training. Though she was unable to move her fingers on her right hand before the training, EMG activities could be detected on her right forearm. In each visit, she took 4× 10-min robot-assisted training sessions, in which she repeated the aforementioned six motion patterns assisted by our intent-driven hand exoskeleton. After the training, her grip force increased from 1.5 to 2.7 kg, her pinch force increased from 1.5 to 2.5 kg, her score of Box and Block test increased from 3 to 7, her score of Fugl-Meyer (Part C) increased from 0 to 7, and her hand function increased from Stage 1 to Stage 2 in Chedoke-McMaster assessment. The results demonstrate the feasibility of robot-assisted training driven by myoelectric pattern recognition after stroke.

  5. Shape Memory Polymers: A Joint Chemical and Materials Engineering Hands-On Experience

    Science.gov (United States)

    Seif, Mujan; Beck, Matthew

    2018-01-01

    Hands-on experiences are excellent tools for increasing retention of first year engineering students. They also encourage interdisciplinary collaboration, a critical skill for modern engineers. In this paper, we describe and evaluate a joint Chemical and Materials Engineering hands-on lab that explores cross-linking and glass transition in…

  6. SIFT Based Vein Recognition Models: Analysis and Improvement

    Directory of Open Access Journals (Sweden)

    Guoqing Wang

    2017-01-01

    Full Text Available Scale-Invariant Feature Transform (SIFT) is being investigated more and more to realize a less-constrained hand vein recognition system. Contrast enhancement (CE), compensating for a deficient dynamic range, is a must for a SIFT-based framework to improve the performance. However, our experiments reveal evidence of a negative influence of CE on SIFT matching. We show that the number of keypoints extracted by gradient-based detectors increases greatly with different CE methods, while on the other hand the matching result of the extracted invariant descriptors is negatively influenced in terms of Precision-Recall (PR) and Equal Error Rate (EER). Rigorous experiments with state-of-the-art and other CE methods adopted in published SIFT-based hand vein recognition systems demonstrate this influence. Moreover, an improved SIFT model that imports the RootSIFT kernel and a Mirror Match Strategy into a unified framework is proposed, to make use of the positive change in keypoints and to make up for the negative influence brought by CE.
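
    The RootSIFT kernel mentioned above is simple to sketch: each SIFT descriptor is L1-normalised and square-rooted, so that Euclidean matching approximates a Hellinger kernel. The descriptors below are random placeholders standing in for real SIFT output.

```python
# RootSIFT: L1-normalise each descriptor, then take the element-wise square root.
import numpy as np

def root_sift(descriptors, eps=1e-7):
    descriptors = np.array(descriptors, dtype=np.float64)               # copy to avoid side effects
    descriptors /= (descriptors.sum(axis=1, keepdims=True) + eps)       # L1 normalisation
    return np.sqrt(descriptors)

sift_desc = np.abs(np.random.default_rng(4).normal(size=(500, 128)))    # stand-in for SIFT output
rs = root_sift(sift_desc)
print(rs.shape, np.allclose((rs ** 2).sum(axis=1), 1.0, atol=1e-6))
```

    Because the transform is applied per descriptor, it can be dropped into an existing SIFT matching pipeline without retraining anything else.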

  7. Integration trumps selection in object recognition

    Science.gov (United States)

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  8. Integration trumps selection in object recognition.

    Science.gov (United States)

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Data complexity in pattern recognition

    CERN Document Server

    Kam Ho Tin

    2006-01-01

    Machines capable of automatic pattern recognition have many fascinating uses. Algorithms for supervised classification, where one infers a decision boundary from a set of training examples, are at the core of this capability. This book looks at data complexity and its role in shaping the theories and techniques in different disciplines

  10. Face Recognition Is Shaped by the Use of Sign Language

    Science.gov (United States)

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  11. Occupational hand eczema and/or contact urticaria

    DEFF Research Database (Denmark)

    Carøe, Tanja K; Ebbehøj, Niels E; Bonde, Jens P

    2018-01-01

    BACKGROUND: Occupational hand eczema and/or contact urticaria may have social consequences such as change of profession or not remaining in the workforce. OBJECTIVES: To identify factors associated with job change in a cohort of participants with recognised occupational hand eczema....../contact urticaria METHODS: A registry-based study including 2703 employees with recognised occupational hand eczema/contact urticaria in Denmark in 2010/2011. Four to five years later the participants received a follow-up questionnaire, comprising questions on current job situation (response rate 58.0%). RESULTS...... to specific professions, cleaning personnel changed profession significantly more often than other workers [71.4% (OR = 2.26)], health care workers significantly less often than other workers [34.0% (OR = 0.36)]. CONCLUSION: Job change occurs frequently during the first years after recognition of occupational...

  12. Haar-like Rectangular Features for Biometric Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rashidi, Maryam

    2013-01-01

    Developing a reliable, fast, and robust biometric recognition system is still a challenging task. This is because the inputs to these systems can be noisy, occluded, poorly illuminated, rotated, and of very low resolution. This paper proposes a probabilistic classifier using Haar-like features......, which mostly have been used for detection, for biometric recognition. The proposed system has been tested for three different biometrics: ear, iris, and hand vein patterns and it is shown that it is robust against most of the mentioned degradations and it outperforms state-of-the-art systems
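
    A hedged sketch of a single two-rectangle Haar-like feature evaluated with an integral image, the kind of rectangular feature reused here for recognition rather than detection; the image and the feature geometry are arbitrary examples.

```python
# One two-rectangle Haar-like feature computed via an integral image.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle whose top-left corner is (r, c)."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0:
        total -= ii[r - 1, c + w - 1]
    if c > 0:
        total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

img = np.random.default_rng(5).integers(0, 256, size=(24, 24)).astype(np.float64)
ii = integral_image(img)
# Two-rectangle (edge) feature: left half minus right half of an 8 x 8 patch.
feature = rect_sum(ii, 4, 4, 8, 4) - rect_sum(ii, 4, 8, 8, 4)
print("Haar-like feature value:", feature)
```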

  13. Modeling the shape hierarchy for visually guided grasping

    CSIR Research Space (South Africa)

    Rezai, O

    2014-10-01

    Full Text Available The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient...

  14. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones

    Directory of Open Access Journals (Sweden)

    Hansong Guo

    2016-08-01

    Full Text Available As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.

  15. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones †

    Science.gov (United States)

    Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E

    2016-01-01

    As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy. PMID:27556461

  16. Investigating the Impact of Possession-Way of a Smartphone on Action Recognition

    Directory of Open Access Journals (Sweden)

    Zae Myung Kim

    2016-06-01

    Full Text Available For the past few decades, action recognition has been attracting many researchers due to its wide use in a variety of applications. Especially with the increasing number of smartphone users, many studies have been conducted using sensors within a smartphone. However, a lot of these studies assume that the users carry the device in specific ways such as by hand, in a pocket, in a bag, etc. This paper investigates the impact of providing an action recognition system with the information of the possession-way of a smartphone, and vice versa. The experimental dataset consists of five possession-ways (hand, backpack, upper-pocket, lower-pocket, and shoulder-bag) and two actions (walking and running) gathered by seven users separately. Various machine learning models including recurrent neural network architectures are employed to explore the relationship between the action recognition and the possession-way recognition. The experimental results show that the assumption of possession-ways of smartphones does affect the performance of action recognition, and vice versa. The results also reveal that a good performance is achieved when both actions and possession-ways are recognized simultaneously.

  17. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
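
    A minimal dynamic time warping (DTW) distance of the kind named above for gesture matching is sketched below; the 1-D toy sequences stand in for tracked hand trajectories and are not the system's actual feature streams.

```python
# Simple dynamic-programming DTW distance between a query gesture and a template.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

template = np.sin(np.linspace(0, np.pi, 30))        # stored gesture template
query = np.sin(np.linspace(0, np.pi, 45)) + 0.05    # same gesture, different speed
other = np.cos(np.linspace(0, np.pi, 45))           # a different gesture
print(dtw_distance(query, template), dtw_distance(other, template))
```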

  18. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  19. Language comprehenders retain implied shape and orientation of objects.

    Science.gov (United States)

    Pecher, Diane; van Dantzig, Saskia; Zwaan, Rolf A; Zeelenberg, René

    2009-06-01

    According to theories of embodied cognition, language comprehenders simulate sensorimotor experiences to represent the meaning of what they read. Previous studies have shown that picture recognition is better if the object in the picture matches the orientation or shape implied by a preceding sentence. In order to test whether strategic imagery may explain previous findings, language comprehenders first read a list of sentences in which objects were mentioned. Only once the complete list had been read was recognition memory tested with pictures. Recognition performance was better if the orientation or shape of the object matched that implied by the sentence, both immediately after reading the complete list of sentences and after a 45-min delay. These results suggest that previously found match effects were not due to strategic imagery and show that details of sensorimotor simulations are retained over longer periods.

  20. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is first introduced to EAST model interaction. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and traditional interaction devices for working with it are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. First, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key gesture feature, are tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
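
    The skin-color segmentation step described above can be approximated as follows; a rectangular Cb/Cr threshold stands in for the paper's elliptical boundary model, and the bounds are common illustrative values rather than the authors'.

        # Simplified skin segmentation in the CbCr plane of YCbCr space (OpenCV
        # orders the channels as Y, Cr, Cb). Threshold values are illustrative.
        import cv2
        import numpy as np

        def skin_mask(frame_bgr):
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
            lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
            upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
            mask = cv2.inRange(ycrcb, lower, upper)
            # Remove small speckles before contour/finger analysis.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)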

  1. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is first introduced to EAST model interaction. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and traditional interaction devices for working with it are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. First, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key gesture feature, are tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  2. Object and event recognition for stroke rehabilitation

    Science.gov (United States)

    Ghali, Ahmed; Cunningham, Andrew S.; Pridmore, Tony P.

    2003-06-01

    Stroke is a major cause of disability and health care expenditure around the world. Existing stroke rehabilitation methods can be effective but are costly and need to be improved. Even modest improvements in the effectiveness of rehabilitation techniques could produce large benefits in terms of quality of life. The work reported here is part of an ongoing effort to integrate virtual reality and machine vision technologies to produce innovative stroke rehabilitation methods. We describe a combined object recognition and event detection system that provides real time feedback to stroke patients performing everyday kitchen tasks necessary for independent living, e.g. making a cup of coffee. The image plane position of each object, including the patient's hand, is monitored using histogram-based recognition methods. The relative positions of hand and objects are then reported to a task monitor that compares the patient's actions against a model of the target task. A prototype system has been constructed and is currently undergoing technical and clinical evaluation.

  3. Efficient Interaction Recognition through Positive Action Representation

    Directory of Open Access Journals (Sweden)

    Tao Hu

    2013-01-01

    Full Text Available This paper proposes a novel approach to decompose two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. A Positive Action plays the decisive role in a two-person exchange. Thus, interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor has become widely available, the Microsoft Kinect camera, which provides RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera to assess two-person interaction recognition approaches. Therefore, we created a new dataset, named K3HI, with six types of complex human interactions: kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach with our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method, shortens the sample training time, and therefore achieves comprehensive superiority.
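
    A minimal sketch of the HMM-based evaluation described above, assuming the hmmlearn package: one Gaussian HMM is trained per interaction class on Positive-Action feature sequences, and a test sequence is assigned to the class with the highest log-likelihood. Feature extraction is omitted and all names are illustrative.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_models(sequences_by_class, n_states=5):
            # sequences_by_class: dict mapping label -> list of (T_i, D) feature arrays
            models = {}
            for label, seqs in sequences_by_class.items():
                X = np.vstack(seqs)
                lengths = [len(s) for s in seqs]
                m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
                m.fit(X, lengths)
                models[label] = m
            return models

        def classify(sequence, models):
            # Pick the class whose HMM assigns the highest log-likelihood.
            return max(models, key=lambda label: models[label].score(sequence))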

  4. Indoor sign recognition for the blind

    CSIR Research Space (South Africa)

    Kunene, D

    2016-09-01

    Full Text Available that is faster and more reliable. We first segment the signs by colour, and then by shape recognition. The sign-type classification is done using a tree search structure that enables the use of iterative contour descriptors like the speeded-up-robust features...

  5. Global precedence effects account for individual differences in both face and object recognition performance.

    Science.gov (United States)

    Gerlach, Christian; Starrfelt, Randi

    2018-03-20

    There has been an increase in studies adopting an individual difference approach to examine visual cognition and in particular in studies trying to relate face recognition performance with measures of holistic processing (the face composite effect and the part-whole effect). In the present study we examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition is related to the magnitude of the global precedence effect.

  6. A Control Strategy with Tactile Perception Feedback for EMG Prosthetic Hand

    Directory of Open Access Journals (Sweden)

    Changcheng Wu

    2015-01-01

    Full Text Available To improve the control effectiveness and make the prosthetic hand not only controllable but also perceivable, an EMG prosthetic hand control strategy is proposed in this paper. The control strategy consists of EMG self-learning motion recognition, a backstepping controller with stiffness fuzzy observation, and force tactile representation. EMG self-learning motion recognition is used to reduce the influence on EMG signals caused by uncertainty in the contact position of the EMG sensors. The backstepping controller with stiffness fuzzy observation realizes position control and grasp force control. Velocity proportional control in free space and grasp force tracking control in restricted space can be realized by the same controller. The force tactile representation helps the user perceive the states of the prosthetic hand. Several experiments were implemented to verify the effect of the proposed control strategy. The results indicate that the proposed strategy is effective. During the experiments, participants' comments showed that the proposed strategy is a better choice for amputees because of the improved controllability and perceptibility.

  7. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    Over the past few decades, face recognition has become a rapidly growing research topic due to the increasing demands in many applications of our daily life, such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with a high recognition capability and low processing time, under different illumination conditions and different facial expressions. The thesis presents a study of the performance of the face recognition system using two techniques: the Principal Component Analysis (PCA) and the Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects, including the recognition rate and the processing time. Face recognition systems that use visual images are sensitive to variations in the lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of these solutions is to work in the infrared (IR) spectrum. IR images have been suggested as an alternative source of information for detection and recognition of faces when there is little or no control over lighting conditions. This arises from the fact that these images are formed due to thermal emissions from the skin, which are an intrinsic property because they depend on the distribution of blood vessels under the skin. On the other hand, IR face recognition systems still have limitations with temperature variations and the recognition of persons wearing eyeglasses. In this thesis we fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. Simulation results show that the fusion of visible and
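
    The wavelet fusion of visible and IR face images mentioned above can be sketched as follows; the single-level Haar decomposition and the average/maximum fusion rules are illustrative assumptions, not necessarily the rules used in the thesis.

        # Fuse a visible and an IR face image (same size, grayscale, float arrays)
        # in the wavelet domain: average the approximation bands, keep the detail
        # coefficient with the larger magnitude.
        import numpy as np
        import pywt

        def fuse_wavelet(visible, infrared, wavelet="haar"):
            cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
            cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(infrared, wavelet)
            cA = 0.5 * (cA_v + cA_i)
            pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
            fused = (cA, (pick(cH_v, cH_i), pick(cV_v, cV_i), pick(cD_v, cD_i)))
            return pywt.idwt2(fused, wavelet)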

  8. Identification of motion from multi-channel EMG signals for control of prosthetic hand

    International Nuclear Information System (INIS)

    Geethanjali, P.; Ray, K.K.

    2011-01-01

    Full text: The authors in this paper propose an effective and efficient pattern recognition technique from four-channel electromyogram (EMG) signals for control of a multifunction prosthetic hand. Time domain features such as mean absolute value, number of zero crossings, number of slope sign changes and waveform length are considered for pattern recognition. The patterns are classified using a simple logistic regression (SLR) technique and a decision tree (DT) using the J48 algorithm. In this study six specific hand and wrist motions are identified from the EMG signals obtained from ten different able-bodied subjects. By considering relevant dominant features for pattern recognition, the processing time as well as the memory space of the SLR and DT classifiers are found to be lower in comparison with a neural network (NN), k-nearest neighbour model 1 (kNN Model-1), k-nearest neighbour model 2 (kNN-Model-2) and linear discriminant analysis. The classification accuracy of the SLR classifier is found to be 91 ± 1.9%. (author)
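
    The four time-domain features named above are straightforward to compute; the sketch below does so for one analysis window of a single EMG channel and stacks the four channels into one pattern vector. The threshold value is illustrative.

        import numpy as np

        def td_features(x, thresh=0.01):
            mav = np.mean(np.abs(x))                                   # mean absolute value
            dx = np.diff(x)
            zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))  # zero crossings
            ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                         (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > thresh))  # slope sign changes
            wl = np.sum(np.abs(dx))                                    # waveform length
            return np.array([mav, zc, ssc, wl])

        def feature_vector(window_4ch):
            # window_4ch: iterable of four 1-D channel arrays for one time window
            return np.concatenate([td_features(ch) for ch in window_4ch])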

  9. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network.

    Science.gov (United States)

    Sun, Xin; Qian, Huinan

    2016-01-01

    Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they have two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images used in previous studies are very clean, without any background, which makes those methods difficult to apply in practical settings. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images is a great challenge. Inspired by the recent progress of deep learning in computer vision, we realize that deep learning methods may provide robust medicine image representations. In this paper, we propose to use the Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; then, for the retrieval problem, we fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, which has in total 5523 images in 95 popular Chinese medicine categories. Experimental results show that our method can achieve an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given that the real-world images contain multiple, partially occluded herbal pieces and cluttered backgrounds. Besides, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin.
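
    The two-stage training described above (softmax loss for recognition, then a triplet loss for retrieval) can be sketched in PyTorch as follows; the ResNet-18 backbone, optimizer settings and data loaders are placeholder assumptions, not the authors' exact architecture.

        import torch
        import torch.nn as nn
        from torchvision import models

        backbone = models.resnet18(num_classes=95)   # 95 herbal medicine categories

        def train_recognition(loader, epochs=10):
            opt = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)
            ce = nn.CrossEntropyLoss()
            for _ in range(epochs):
                for images, labels in loader:
                    opt.zero_grad()
                    ce(backbone(images), labels).backward()
                    opt.step()

        def finetune_retrieval(triplet_loader, epochs=5):
            # Fine-tune the same backbone so that images of the same category
            # end up closer to each other than to images of other categories.
            opt = torch.optim.SGD(backbone.parameters(), lr=0.001, momentum=0.9)
            triplet = nn.TripletMarginLoss(margin=0.2)
            for _ in range(epochs):
                for anchor, positive, negative in triplet_loader:
                    opt.zero_grad()
                    triplet(backbone(anchor), backbone(positive), backbone(negative)).backward()
                    opt.step()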

  10. Transformations in the Recognition of Visual Forms

    Science.gov (United States)

    Charness, Neil; Bregman, Albert S.

    1973-01-01

    In a study which required college students to learn to recognize four flexible plastic shapes photographed on different backgrounds from different angles, the importance of a context-rich environment for the learning and recognition of visual patterns was illustrated. (Author)

  11. Institutional Choice and Recognition in Development

    DEFF Research Database (Denmark)

    Rutt, Rebecca Leigh

    Abstract This thesis concerns the role of local institutions in fostering development including natural resource management, and how this role is shaped by relations with higher scale institutions such as development agencies and national governments. Specifically, it examines the choice of local...... objective of this thesis was to contribute to understanding processes and outcomes of institutional choice and recognition. It employed mixed methods but primarily semi structured interviews in multiple sites across Nepal. In responding to specific objectives, namely to better understand: i) the rationales...... behind choices of local institutional counterparts, ii) the belonging and citizenship available with local institutions, iii) the dynamics and mutuality of recognition between higher and lower scale institutions, and iv) the social outcomes of choice and recognition, this thesis shows that the way choice...

  12. Fundamental geodesic deformations in spaces of treelike shapes

    DEFF Research Database (Denmark)

    Feragen, Aasa; Lauze, Francois Bernard; Nielsen, Mads

    2010-01-01

    This paper presents a new geometric framework for analysis of planar treelike shapes for applications such as shape matching, recognition and morphology, using the geometry of the space of treelike shapes. Mathematically, the shape space is given the structure of a stratified set which...... is a quotient of a normed vector space with a metric inherited from the vector space norm. We give examples of geodesic paths in tree-space corresponding to fundamental deformations of small trees, and discuss how these deformations are key building blocks for understanding deformations between larger trees....

  13. Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Johnson

    2011-06-01

    Full Text Available Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (~8-14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task-relevance of shape information was systematically manipulated across trial blocks and EEG was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal delay-period alpha-band power in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape-location associations in short-term memory.
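
    Delay-period alpha-band power of the kind reported above is commonly estimated from the EEG power spectral density; a minimal sketch with Welch's method follows, with the sampling rate and window length as illustrative values.

        import numpy as np
        from scipy.signal import welch

        def alpha_power(eeg, fs=256.0, band=(8.0, 14.0)):
            # eeg: 1-D array with one channel's delay-period samples
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second windows
            idx = (freqs >= band[0]) & (freqs <= band[1])
            return np.trapz(psd[idx], freqs[idx])                 # integrate PSD over 8-14 Hz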

  14. Cognitive object recognition system (CORS)

    Science.gov (United States)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  15. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed by many bio-symbols. Speech and facial expression are two of them. They are both regarded as emotional information which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction. So 13 speech features out of 37 and 10 facial features out of 33 are selected to represent emotional information, and 52 audiovisual features are selected when speech and video are fused together, owing to their synchronization. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that work on multimodule fused recognition will become the trend of emotion recognition in the future.

  16. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for recognizing various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. We construct an artificial neural network based on the perceptron architecture which, after learning, reliably identified cube-recognition processes, for example, the perception of left- or right-oriented Necker cubes with different edge intensities, and we demonstrate its effectiveness for pattern recognition in the experimental EEG.
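
    A minimal stand-in for the perceptron-based EEG classification described above, using scikit-learn's MLPClassifier on synthetic feature vectors; the feature dimensionality and labels are placeholders.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X = np.random.randn(200, 32)             # placeholder EEG feature vectors (one per trial)
        y = np.random.randint(0, 2, size=200)    # placeholder left/right Necker-cube labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))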

  17. Vibrotactile feedback for conveying object shape information as perceived by artificial sensing of robotic arm.

    Science.gov (United States)

    Khasnobish, Anwesha; Pal, Monalisa; Sardar, Dwaipayan; Tibarewala, D N; Konar, Amit

    2016-08-01

    This work is a preliminary study towards developing an alternative communication channel for conveying shape information to aid in recognition of items when tactile perception is hindered. Tactile data, acquired during object exploration by a sensor-fitted robot arm, are processed to recognize four basic geometric shapes. Patterns representing each shape, classified from the tactile data, are generated using micro-controller-driven vibration motors, which vibrotactually stimulate users to convey the particular shape information. These motors are attached to the subjects' arms, and their psychological (verbal) responses are recorded to assess the competence of the system to convey shape information to the user in the form of vibrotactile stimulation. Object shapes are classified from tactile data with an average accuracy of 95.21%. Over three successive sessions of shape recognition from vibrotactile patterns, the subjects' responses showed learning of the stimuli, with recognition increasing from 75 to 95%. This observation substantiates that users learn the vibrotactile stimulation over the sessions, which in turn increases the system's efficacy. The tactile sensing module and the vibrotactile pattern generating module are integrated to complete the system, whose operation is analysed in real time. Thus, the work demonstrates a successful implementation of the complete schema of an artificial tactile sensing system for object-shape recognition through vibrotactile stimulation.

  18. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching...... performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat...

  19. [A case with apraxia of tool use: selective inability to form a hand posture for a tool].

    Science.gov (United States)

    Hayakawa, Yuko; Fujii, Toshikatsu; Yamadori, Atsushi; Meguro, Kenichi; Suzuki, Kyoko

    2015-03-01

    Impaired tool use is recognized as a symptom of ideational apraxia. While many studies have focused on difficulties in producing gestures as a whole, using tools involves several steps; these include forming hand postures appropriate for the use of a certain tool, selecting objects or body parts to act on, and producing gestures. In previously reported cases, both producing and recognizing hand postures were impaired. Here we report the first case showing a selective impairment in forming hand postures appropriate for tools, with preserved recognition of the required hand postures. A 24-year-old, right-handed man was admitted to hospital because of sensory impairment of the right side of the body, mild aphasia, and impaired tool use due to a left parietal subcortical hemorrhage. His ability to make symbolic gestures, copy finger postures, and orient his hand to pass through a slit was well preserved. Semantic knowledge of tools and hand postures was also intact. He could flawlessly select the correct hand postures in recognition tasks. He only demonstrated difficulties in forming a hand posture appropriate for a tool. Once he properly grasped a tool by trial and error, he could use it without hesitation. These observations suggest that each step of tool use should be thoroughly examined in patients with ideational apraxia.

  20. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) system, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts and the hand blob changing factor. The Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous work, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
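
    The PCA-based recognition step described above can be sketched as follows, assuming trajectories have already been extracted and resampled to a fixed length; the nearest-neighbour decision in the reduced space is an illustrative choice.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def train(trajectories, labels, n_components=10):
            # trajectories: (N, F) array, one flattened fixed-length trajectory per row
            pca = PCA(n_components=n_components).fit(trajectories)
            knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(trajectories), labels)
            return pca, knn

        def recognize(trajectory, pca, knn):
            return knn.predict(pca.transform(trajectory.reshape(1, -1)))[0]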

  1. Face recognition from unconstrained three-dimensional face images using multitask sparse representation

    Science.gov (United States)

    Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar

    2018-01-01

    We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust feature (SURF) algorithm to the depth representation of shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoints descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.

  2. Pattern recognition and modelling of earthquake registrations with interactive computer support

    International Nuclear Information System (INIS)

    Manova, Katarina S.

    2004-01-01

    The object of the thesis is Pattern Recognition. Pattern recognition, i.e. classification, is applied in many fields: speech recognition, hand-printed character recognition, medical analysis, satellite and aerial-photo interpretation, biology, computer vision, information retrieval and so on. In this thesis its applicability in seismology is studied. Signal classification is an area of great importance in a wide variety of applications. This thesis deals with the problem of (automatic) classification of earthquake signals, which are non-stationary signals. Non-stationary signal classification is an area of active research in the signal and image processing community. The goal of the thesis is recognition of earthquake signals according to their epicentral zone. Source classification, i.e. recognition, is based on transformation of seismograms (earthquake registrations) into images, via time-frequency transformations, and on applying image processing and pattern recognition techniques for feature extraction, classification and recognition. The tested data include local earthquakes from seismic regions in Macedonia. By using actual seismic data it is shown that the proposed methods provide satisfactory results for classification and recognition. (Author)
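
    The time-frequency transformation of seismograms into images, on which the image processing and pattern recognition steps then operate, can be sketched as follows; the sampling rate and window parameters are illustrative.

        import numpy as np
        from scipy.signal import spectrogram

        def seismogram_to_image(trace, fs=100.0):
            f, t, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=128)
            img = 10.0 * np.log10(Sxx + 1e-12)       # power in dB
            # Normalize to [0, 1] so the result can be treated as a grayscale image.
            return (img - img.min()) / (img.max() - img.min() + 1e-12)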

  3. Relating the Content and Confidence of Recognition Judgments

    Science.gov (United States)

    Selmeczy, Diana; Dobbins, Ian G.

    2014-01-01

    The Remember/Know procedure, developed by Tulving (1985) to capture the distinction between the conscious correlates of episodic and semantic retrieval, has spawned considerable research and debate. However, only a handful of reports have examined the recognition content beyond this dichotomous simplification. To address this, we collected…

  4. Implementation of perceptual aspects in a face recognition algorithm

    International Nuclear Information System (INIS)

    Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M

    2013-01-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact, face recognition offers the opportunity to operate at a low invasive level without the collaboration of the subjects under test, with face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the expression and so on, affecting the recognition rate. On the other hand, human recognition of faces is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the position of a set of reference (repère) points.

  5. Postura da mão e imagética motora: um estudo sobre reconhecimento de partes do corpo Hand posture and motor imagery: a body-part recognition study

    Directory of Open Access Journals (Sweden)

    AP Lameira

    2008-10-01

    Full Text Available OBJECTIVE: Like motor imagery, the recognition of body parts activates specific somatosensory representations. These representations are activated implicitly to compare the body with the stimulus. In the present study, we investigated the influence of proprioceptive information about posture on the recognition of body parts (hands) and propose that this task be used in the rehabilitation of neurological patients. MATERIALS AND METHODS: Ten right-handed volunteers participated in the experiment. The task was to recognize the laterality of pictures of a hand presented in several views and at several orientation angles. For a picture of the right hand, the volunteer pressed the right key, and for a picture of the left hand, the left key. The volunteers performed two sessions: one with the hands in a prone posture and the other with the hands in a supine posture. RESULTS: Manual reaction times (MRT) were longer for views and orientations in which it is difficult to perform the actual movement, showing that motor representations are engaged during the task to compare the body with the stimulus. In addition, the subject's posture had an influence for specific views and angles. CONCLUSIONS: These results show that motor representations are activated to compare the body with the stimulus and that hand posture influences this resonance between the stimulus and the body part.

  6. Human motion sensing and recognition a fuzzy qualitative approach

    CERN Document Server

    Liu, Honghai; Ji, Xiaofei; Chan, Chee Seng; Khoury, Mehdi

    2017-01-01

    This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the ...

  7. Specificity and affinity quantification of flexible recognition from underlying energy landscape topography.

    Directory of Open Access Journals (Sweden)

    Xiakun Chu

    2014-08-01

    Full Text Available Flexibility in biomolecular recognition is essential and critical for many cellular activities. Flexible recognition often leads to moderate affinity but high specificity, in contradiction with the conventional wisdom that high affinity and high specificity are coupled. Furthermore, quantitative understanding of the role of flexibility in biomolecular recognition is still challenging. Here, we meet the challenge by quantifying the intrinsic biomolecular recognition energy landscapes with and without flexibility through the underlying density of states. We quantified the thermodynamic intrinsic specificity by the topography of the intrinsic binding energy landscape and the kinetic specificity by the association rate. We found that thermodynamic and kinetic specificity are strongly correlated. Furthermore, we found that flexibility decreases binding affinity on one hand, but increases binding specificity on the other hand, and that the decrease or increase of affinity and specificity is strongly correlated with the degree of flexibility. This shows that more (less) flexibility leads to weaker (stronger) coupling between affinity and specificity. Our work provides a theoretical foundation and quantitative explanation of previous qualitative studies on the relationship among flexibility, affinity and specificity. In addition, we found that the folding energy landscapes are more funneled with binding, indicating that binding helps folding during the recognition. Finally, we demonstrated that the whole binding-folding energy landscapes can be integrated from the rigid binding and isolated folding energy landscapes under weak flexibility. Our results provide a novel way to quantify affinity and specificity in flexible biomolecular recognition.

  8. Radiological evaluation of the morphological changes of root canals shaped with ProTaper for hand use and the ProTaper and RaCe rotary instruments.

    Science.gov (United States)

    Aguiar, Carlos M; Câmara, Andréa C

    2008-12-01

    This study evaluated, by means of radiographic examination, the occurrence of deviations in the apical third of root canals shaped with hand and rotary instruments. Sixty mandibular human molars were divided into three groups. The root canals in group 1 were instrumented with ProTaper (Dentsply/Maillefer, Ballaigues, Switzerland) for hand use, group 2 with rotary ProTaper and group 3 with RaCe. The images obtained by double superimposition of the pre- and postoperative radiographs were evaluated by two endodontists with the aid of a magnifier-viewer and a fivefold magnifier. Statistical analysis was performed using the Fisher-Freeman-Halton test. Instrumentation with the ProTaper for hand use showed a deviation in the apical third in 25% of the canals, as did the rotary ProTaper, while the corresponding figure for the RaCe (FKG Dentaire, La-Chaux-de-Fonds, Switzerland) was 20%, but these results were not statistically significant. There was no correlation between the occurrence of deviations in the apical third and the systems used.

  9. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    Science.gov (United States)

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  10. Shape-specific perceptual learning in a figure-ground segregation task.

    Science.gov (United States)

    Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M

    2006-03-01

    What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.

  11. Category-specificity in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been...... demonstrated in neurologically intact subjects, but the findings are contradictory and there is no agreement as to why category-effects arise. This article presents a Pre-semantic Account of Category Effects (PACE) in visual object recognition. PACE assumes two processing stages: shape configuration (the...... binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies...

  12. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess properties of strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. In order to improve the rotation invariance of the slice image, three new feature descriptors, namely the feature slice image, slice-Zernike moments, and slice-Fourier moments, are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively with respect to the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the systems, and that the four improved recognition systems perform differently under different conditions.

  13. Bio-recognitive photonics of a DNA-guided organic semiconductor

    Science.gov (United States)

    Back, Seung Hyuk; Park, Jin Hyuk; Cui, Chunzhi; Ahn, Dong June

    2016-01-01

    Incorporation of duplex DNA with higher molecular weights has attracted attention for a new opportunity towards a better organic light-emitting diode (OLED) capability. However, biological recognition by OLED materials is yet to be addressed. In this study, specific oligomeric DNA-DNA recognition is successfully achieved by tri (8-hydroxyquinoline) aluminium (Alq3), an organic semiconductor. Alq3 rods crystallized with guidance from single-strand DNA molecules show, strikingly, a unique distribution of the DNA molecules with a shape of an 'inverted' hourglass. The crystal's luminescent intensity is enhanced by 1.6-fold upon recognition of the perfect-matched target DNA sequence, but not in the case of a single-base mismatched one. The DNA-DNA recognition forming double-helix structure is identified to occur only in the rod's outer periphery. This study opens up new opportunities of Alq3, one of the most widely used OLED materials, enabling biological recognition.

  14. Bio-recognitive photonics of a DNA-guided organic semiconductor.

    Science.gov (United States)

    Back, Seung Hyuk; Park, Jin Hyuk; Cui, Chunzhi; Ahn, Dong June

    2016-01-04

    Incorporation of duplex DNA with higher molecular weights has attracted attention for a new opportunity towards a better organic light-emitting diode (OLED) capability. However, biological recognition by OLED materials is yet to be addressed. In this study, specific oligomeric DNA-DNA recognition is successfully achieved by tri (8-hydroxyquinoline) aluminium (Alq3), an organic semiconductor. Alq3 rods crystallized with guidance from single-strand DNA molecules show, strikingly, a unique distribution of the DNA molecules with a shape of an 'inverted' hourglass. The crystal's luminescent intensity is enhanced by 1.6-fold upon recognition of the perfect-matched target DNA sequence, but not in the case of a single-base mismatched one. The DNA-DNA recognition forming double-helix structure is identified to occur only in the rod's outer periphery. This study opens up new opportunities of Alq3, one of the most widely used OLED materials, enabling biological recognition.

  15. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
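
    A toy version of the word shape coding idea described above, built from character classes rather than from word images; the real technique derives ascender/descender, hole and water-reservoir cues directly from the image, without any OCR.

        ASCENDERS = set("bdfhklt")
        DESCENDERS = set("gjpqy")

        def word_shape_code(word):
            code = []
            for ch in word.lower():
                if ch in ASCENDERS:
                    code.append("A")    # ascender
                elif ch in DESCENDERS:
                    code.append("D")    # descender
                elif ch.isalpha():
                    code.append("x")    # x-height character
                else:
                    code.append("?")    # digits / punctuation
            return "".join(code)

        print(word_shape_code("shape"))  # -> "xAxDx"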

  16. The role of shape recognition in figure/ground perception in infancy.

    Science.gov (United States)

    White, Hannah; Jubran, Rachel; Heck, Alison; Chroust, Alyson; Bhatt, Ramesh S

    2018-04-30

    In this study we sought to determine whether infants, like adults, utilize previous experience to guide figure/ground processing. After familiarization to a shape, 5-month-olds preferentially attended to the side of an ambiguous figure/ground test stimulus corresponding to that shape, suggesting that they were viewing that portion as the figure. Infants' failure to exhibit this preference in a control condition in which both sides of the test stimulus were displayed as figures indicated that the results in the experimental condition were not due to a preference between two figure shapes. These findings demonstrate for the first time that figure/ground processing in infancy is sensitive to top-down influence. Thus, a critical aspect of figure/ground processing is functional early in life.

  17. Shape recognition of microbial cells by colloidal cell imprints

    NARCIS (Netherlands)

    Borovicka, J.; Stoyanov, S.D.; Paunov, V.N.

    2013-01-01

    We have engineered a class of colloids which can recognize the shape and size of targeted microbial cells and selectively bind to their surfaces. These imprinted colloid particles, which we called "colloid antibodies", were fabricated by partial fragmentation of silica shells obtained by templating

  18. Hand-operated and rotary ProTaper instruments: a comparison of working time and number of rotations in simulated root canals.

    Science.gov (United States)

    Pasqualini, Damiano; Scotti, Nicola; Tamagnone, Lorenzo; Ellena, Federica; Berutti, Elio

    2008-03-01

    The aim of this study was to compare the effective shaping time and number of rotations required by an endodontist working with hand and rotary ProTaper instruments to completely shape simulated root canals. Eighty Endo Training Blocks (curved canal shape) were used. Manual preflaring was performed with K-Flexofiles #08-10-12-15-17 and #20 Nitiflex at a working length of 18 mm. Specimens were then randomly assigned to 2 different groups (n = 40); group 1 was shaped using hand ProTaper and group 2 with rotary ProTaper. The number of rotations made in the canal and the effective time required to achieve complete canal shaping were recorded for each instrument. Differences between groups were analyzed with the nonparametric Mann-Whitney U test. Hand ProTaper required significantly fewer rotations than rotary ProTaper, whereas the effective working time needed to fully shape the simulated canal was significantly longer with hand ProTaper.

  19. Handwritten recognition of Tamil vowels using deep learning

    Science.gov (United States)

    Ram Prashanth, N.; Siddarth, B.; Ganesh, Anirudh; Naveen Kumar, Vaegae

    2017-11-01

    We come across a large volume of handwritten text in our daily lives, and handwritten character recognition has long been an important area of research in pattern recognition. The complexity of the task varies among languages, largely due to the similarity between characters, their distinct shapes and the number of characters, which are all language-specific properties. There have been numerous works on recognition of English characters with laudable success, but regional languages have been addressed less frequently and with lower accuracy. In this paper, we explore the performance of Deep Belief Networks in the classification of handwritten Tamil vowels and compare the results obtained. The proposed method shows satisfactory recognition accuracy in light of the difficulties faced with regional languages, such as the similarity between characters and the minute nuances that differentiate them. We can further extend this approach to all the Tamil characters.
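
    As a rough, simplified stand-in for the Deep Belief Network mentioned above, the sketch below stacks an unsupervised RBM feature layer and a logistic-regression output in scikit-learn; inputs are assumed to be flattened character images scaled to [0, 1], and the hyperparameters are illustrative.

        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import BernoulliRBM
        from sklearn.pipeline import Pipeline

        def build_classifier():
            rbm = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)
            clf = LogisticRegression(max_iter=1000)
            return Pipeline([("rbm", rbm), ("logreg", clf)])

        # model = build_classifier()
        # model.fit(X_train, y_train)        # X_train: (N, pixels) arrays in [0, 1]
        # print(model.score(X_test, y_test))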

  20. The Hand-Foot Skin Reaction and Quality of Life Questionnaire: An Assessment Tool for Oncology

    OpenAIRE

    Anderson, Roger T.; Keating, Karen N.; Doll, Helen A.; Camacho, Fabian

    2015-01-01

    This study describes the development and validation of a brief, patient self-reported questionnaire (the hand-foot skin reaction and quality of life questionnaire) supporting its suitability for use in clinical research to aid in early recognition of symptoms, to evaluate the effectiveness of agents for hand-foot skin reaction (HFSR) or hand-foot syndrome (HFS) treatment within clinical trials, and to evaluate the impact of these treatments on HFS/R-associated patients’ health-related quality...

  1. Estimating volume, biomass, and potential emissions of hand-piled fuels

    Science.gov (United States)

    Clinton S. Wright; Cameron S. Balog; Jeffrey W. Kelly

    2009-01-01

    Dimensions, volume, and biomass were measured for 121 hand-constructed piles composed primarily of coniferous (n = 63) and shrub/hardwood (n = 58) material at sites in Washington and California. Equations using pile dimensions, shape, and type allow users to accurately estimate the biomass of hand piles. Equations for estimating true pile volume from simple geometric...

  2. Contribution to automatic handwritten characters recognition. Application to optical moving characters recognition

    International Nuclear Information System (INIS)

    Gokana, Denis

    1986-01-01

    This paper describes research work on computer-aided vision relating to the design of a vision system which can recognize isolated handwritten characters written on a moving support. We use a technique which consists in analyzing the information contained in the contours of the polygon circumscribed to the character's shape. These contours are segmented and labelled to give a new set of features constituted by right and left 'profiles' and by topological and algebraic invariant properties. A new method of character recognition induced from this representation, based on a multilevel hierarchical technique, is then described. At the primary level, we use a fuzzy classification with a dynamic programming technique using the 'profiles'. The other levels refine the recognition by using the topological and algebraic invariant properties. Several results are presented and an accuracy of 99% was reached for handwritten numeral characters, thereby attesting to the robustness of our algorithm. (author) [fr

  3. A Survey of 2D Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Mejda Chihaoui

    2016-09-01

    Full Text Available Despite the existence of various biometric techniques, like fingerprints, iris scan, as well as hand geometry, the most efficient and more widely-used one is face recognition. This is because it is inexpensive, non-intrusive and natural. Therefore, researchers have developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology. There are methods that use the entire face as input data for the proposed recognition system, methods that do not consider the whole face, but only some features or areas of the face and methods that use global and local face characteristics simultaneously. In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods by expressing each method’s principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of the applications of these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared.

  4. T-ray spectroscopy of biomolecules: from chemical recognition toward biochip analysis - horizons and hurdles

    DEFF Research Database (Denmark)

    Fischer, Bernd M.; Helm, Hanspeter; Jepsen, Peter Uhd

    2006-01-01

    In the recent years, there has been an increased interest in the exploitation of the far-infrared spectral region for applications based on chemical recognition. The fact that on the one hand many packaging materials are transparent for THz radiation and on the other hand the THz-spectra of many ...

  5. The Neuropsychology of Familiar Person Recognition from Face and Voice

    Directory of Open Access Journals (Sweden)

    Guido Gainotti

    2014-05-01

    Full Text Available Prosopagnosia has been considered for a long period of time as the most important and almost exclusive disorder in the recognition of familiar people. In recent years, however, this conviction has been undermined by the description of patients showing a concomitant defect in the recognition of familiar faces and voices as a consequence of lesions encroaching upon the right anterior temporal lobe (ATL). These new data have obliged researchers to reconsider on one hand the construct of ‘associative prosopagnosia’ and on the other hand current models of people recognition. A systematic review of the patterns of familiar people recognition disorders observed in patients with right and left ATL lesions has shown that in patients with right ATL lesions face familiarity feelings and the retrieval of person-specific semantic information from faces are selectively affected, whereas in patients with left ATL lesions the defect selectively concerns famous people naming. Furthermore, some patients with right ATL lesions and intact face familiarity feelings show a defect in the retrieval of person-specific semantic knowledge greater from face than from name. These data are at variance with current models assuming: (a) that familiarity feelings are generated at the level of person identity nodes (PINs), where information processed by various sensory modalities converges, and (b) that PINs provide a modality-free gateway to a single semantic system, where information about people is stored in an amodal format. They suggest, on the contrary: (a) that familiarity feelings are generated at the level of modality-specific recognition units; (b) that face and voice recognition units are represented more in the right than in the left ATL; (c) that the right ATL mainly stores person-specific information based on a convergence of perceptual information, whereas the left ATL represents verbally-mediated person-specific information.

  6. Mental rotation of anthropoid hands: a chronometric study

    Directory of Open Access Journals (Sweden)

    L.G. Gawryszewski

    2007-03-01

    Full Text Available It has been shown that mental rotation of objects and human body parts is processed differently in the human brain. But what about body parts belonging to other primates? Does our brain process this information like any other object or does it instead maximize the structural similarities with our homologous body parts? We tried to answer this question by measuring the manual reaction time (MRT) of human participants discriminating the handedness of drawings representing the hands of four anthropoid primates (orangutan, chimpanzee, gorilla, and human). Twenty-four right-handed volunteers (13 males and 11 females) were instructed to judge the handedness of a hand drawing in palm view by pressing a left/right key. The orientation of hand drawings varied from 0º (fingers upwards) to 90º lateral (fingers pointing away from the midline), 180º (fingers downwards) and 90º medial (fingers towards the midline). The results showed an effect of rotation angle (F(3,69) = 19.57, P < 0.001), but not of hand identity, on MRTs. Moreover, for all hand drawings, a medial rotation elicited shorter MRTs than a lateral rotation (960 and 1169 ms, respectively, P < 0.05). This result has been previously observed for drawings of the human hand and related to biomechanical constraints of movement performance. Our findings indicate that anthropoid hands are essentially equivalent stimuli for handedness recognition. Since the task involves mentally simulating the posture and rotation of the hands, we wondered if "mirror neurons" could be involved in establishing the motor equivalence between the stimuli and the participants' own hands.

  7. Investigations of Hemispheric Specialization of Self-Voice Recognition

    Science.gov (United States)

    Rosa, Christine; Lassonde, Maryse; Pinard, Claudine; Keenan, Julian Paul; Belin, Pascal

    2008-01-01

    Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left hand. In Experiment 2, participants were presented with auditory morphs between the…

  8. Grip-pattern recognition: Applied to a smart gun

    NARCIS (Netherlands)

    Shang, X.

    2008-01-01

    In our work the verification performance of a biometric recognition system based on grip patterns, as part of a smart gun for use by police officers, has been investigated. The biometric features are extracted from a two-dimensional pattern of the pressure exerted on the grip of a gun by the hand

  9. Experimental acquisition of long-range portraits of objects and their recognition

    International Nuclear Information System (INIS)

    Buryi, E V; Kosykh, A E

    1998-01-01

    An experimental investigation was made of recognition of the perspectives of model objects on the basis of the shape of the envelope of a scattered laser pulse. Stable recognition of various perspectives of an object was found to be possible even for high ratios of the probe pulse duration to the time of its propagation along the object surface. (laser applications and other topics in quantum electronics)

  10. Color constancy in 3D-2D face recognition

    Science.gov (United States)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.
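
    The record does not name which illuminant-estimation algorithms were evaluated. As one simple, standard example of a computational color constancy method, a gray-world sketch in Python (illustrative only, not the method of the cited paper):

```python
import numpy as np

def gray_world_correction(image):
    """Gray-world color constancy: assume the average scene color is gray,
    estimate the illuminant as the per-channel mean, and rescale channels.

    image: float array of shape (H, W, 3) with values in [0, 1].
    Returns the corrected image and the estimated illuminant color."""
    img = np.asarray(image, dtype=np.float64)
    illuminant = img.reshape(-1, 3).mean(axis=0)           # per-channel mean
    gain = illuminant.mean() / np.clip(illuminant, 1e-6, None)
    corrected = np.clip(img * gain, 0.0, 1.0)
    return corrected, illuminant
```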

  11. Media identities and media-influenced identifications: Visibility and identity recognition in the media

    Directory of Open Access Journals (Sweden)

    Víctor Fco. Sampedro Blanco

    2004-10-01

    Full Text Available The media establish, in large part, the patterns of visibility and public recognition of collective identities. We define media identities as those that are the object of production and diffusion by the media. From this discourse, the communities and individuals elaborate media-influenced identifications; that is, processes of recognition or banishment, (re)articulating the identity markers that the media offer with other cognitive and emotional sources. The generation and appropriation of the identities are subjected to a media hierarchisation that influences their normalisation or marginalisation. The identities presented by the media and assumed by the audience as part of the official, hegemonic discourse are normalised, whereas the identities and identifications formulated in popular and minority terms are marginalised. After presenting this conceptual and analytical framework, this study attempts to outline the logics that condition the presentation, on the one hand, and the public recognition, on the other, of contemporary identities.

  12. Illumination-Invariant and Deformation-Tolerant Inner Knuckle Print Recognition Using Portable Devices

    Directory of Open Access Journals (Sweden)

    Xuemiao Xu

    2015-02-01

    Full Text Available We propose a novel biometric recognition method that identifies the inner knuckle print (IKP). It is robust enough to confront uncontrolled lighting conditions, pose variations and low imaging quality. Such robustness is crucial for its application on portable devices equipped with consumer-level cameras. We achieve this robustness by two means. First, we propose a novel feature extraction scheme that highlights the salient structure and suppresses incorrect and/or unwanted features. The extracted IKP features retain simple geometry and morphology and reduce the interference of illumination. Second, to counteract the deformation induced by different hand orientations, we propose a novel structure-context descriptor based on local statistics. To the best of our knowledge, we are the first to simultaneously consider illumination invariance and deformation tolerance for appearance-based low-resolution hand biometrics. Settings in previous works are more restrictive: they made strong assumptions either about the illumination conditions or about the hand orientation. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of recognition accuracy, especially under uncontrolled lighting conditions and with flexible hand orientations.

  13. Virtual Control of Prosthetic Hand Based on Grasping Patterns and Estimated Force from Semg

    Directory of Open Access Journals (Sweden)

    Zhu Gao-Ke

    2016-01-01

    Full Text Available Myoelectric prosthetic hands aim to serve upper limb amputees. Myoelectric control of the hand grasp action is a real-time, online method, so it is necessary to study online electrical control of prosthetic hands. In this paper, a strategy of simultaneous EMG decoding of grasping patterns and grasping force was realized by controlling a virtual multi-degree-of-freedom prosthetic hand and a real one-degree-of-freedom prosthetic hand simultaneously. The former realized the grasping patterns through recognition of the sEMG pattern; the latter implemented the grasping force through sEMG force decoding. The results show that the control method is effective and feasible.

  14. Hand hygiene in the intensive care unit.

    Science.gov (United States)

    Tschudin-Sutter, Sarah; Pargger, Hans; Widmer, Andreas F

    2010-08-01

    Healthcare-associated infections affect 1.4 million patients at any time worldwide, as estimated by the World Health Organization. In intensive care units, the burden of healthcare-associated infections is greatly increased, causing additional morbidity and mortality. Multidrug-resistant pathogens are commonly involved in such infections and render effective treatment challenging. Proper hand hygiene is the single most important, simplest, and least expensive means of preventing healthcare-associated infections. In addition, it is equally important to stop transmission of multidrug-resistant pathogens. According to the Centers for Disease Control and Prevention and World Health Organization guidelines on hand hygiene in health care, alcohol-based handrub should be used as the preferred means for routine hand antisepsis. Alcohols have excellent in vitro activity against Gram-positive and Gram-negative bacteria, including multidrug-resistant pathogens, such as methicillin-resistant Staphylococcus aureus and vancomycin-resistant enterococci, Mycobacterium tuberculosis, a variety of fungi, and most viruses. Some pathogens, however, such as Clostridium difficile, Bacillus anthracis, and noroviruses, may require special hand hygiene measures. Failure to provide user friendliness of hand hygiene equipment and shortage of staff are predictors for noncompliance, especially in the intensive care unit setting. Therefore, practical approaches to promote hand hygiene in the intensive care unit include provision of a minimal number of handrub dispensers per bed, monitoring of compliance, and choice of the most attractive product. Lack of knowledge of guidelines for hand hygiene, lack of recognition of hand hygiene opportunities during patient care, and lack of awareness of the risk of cross-transmission of pathogens are barriers to good hand hygiene practices. Multidisciplinary programs to promote increased use of alcoholic handrub lead to an increased compliance of healthcare

  15. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which marks the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition rate.

  16. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training.

    Science.gov (United States)

    Kutafina, Ekaterina; Laukamp, David; Bettermann, Ralf; Schroeder, Ulrik; Jonas, Stephan M

    2016-08-03

    In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user's hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills.

  17. Nearest neighbour classification of Indian sign language gestures ...

    Indian Academy of Sciences (India)

    In the ideal case, a gesture recognition ... Every geographical region has developed its own sys- ... et al [10] present a study on vision-based static hand shape .... tures, and neural networks for recognition. ..... We used the city-block dis-.

  18. Electromyography (EMG) signal recognition using combined discrete wavelet transform based adaptive neuro-fuzzy inference systems (ANFIS)

    Science.gov (United States)

    Arozi, Moh; Putri, Farika T.; Ariyanto, Mochammad; Khusnul Ari, M.; Munadi, Setiawan, Joga D.

    2017-01-01

    The number of people with disabilities increases from year to year, whether due to congenital factors, sickness, accidents or war. One form of disability is the loss of hand function. This condition motivates the search for solutions in the form of an artificial hand with abilities approaching those of a human hand. Advances in neuroscience currently allow electromyography (EMG) to be used to control the motion of an artificial prosthetic hand, with the EMG signal serving as the control input. This study is the beginning of a larger research effort planned for the development of an artificial prosthetic hand with EMG signal input, and it focuses on EMG signal recognition. Preliminary results show that EMG signal recognition using a combined discrete wavelet transform and Adaptive Neuro-Fuzzy Inference System (ANFIS) produces an accuracy of 98.3% for training and 98.51% for testing. The results can thus be used as an input signal for the Simulink block diagram of a prosthetic hand to be developed in the next study. The research will proceed with the construction of the artificial prosthetic hand along with a Simulink program controlling and integrating everything into one system.
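
    A minimal sketch of the feature-extraction idea, using PyWavelets for the discrete wavelet decomposition; a generic scikit-learn classifier stands in for the ANFIS stage, which is not reproduced here, and the wavelet, level and summary statistics are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.svm import SVC                   # stand-in for the ANFIS classifier

def dwt_features(emg_window, wavelet="db4", level=4):
    """Summarize each DWT sub-band of one sEMG window by its mean absolute
    value and standard deviation (a common compact feature set)."""
    coeffs = pywt.wavedec(np.asarray(emg_window, dtype=float), wavelet, level=level)
    feats = []
    for band in coeffs:
        feats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.array(feats)

# Hypothetical usage: X_windows is a list of 1-D sEMG windows, y the motion labels.
# X = np.array([dwt_features(w) for w in X_windows])
# clf = SVC().fit(X, y)
```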

  19. Dual-band left-handed metamaterials fabricated by using tree-shaped fractal

    International Nuclear Information System (INIS)

    Xu He-Xiu; Wang Guang-Ming; Yang Zi-Mu; Wang Jia-Fu

    2012-01-01

    A method of fabricating dual-band left-handed metamaterials (LHMs) is investigated numerically and experimentally using single-sided tree-like fractals. The resulting structure features multiband magnetic resonances and two electric resonances. By appropriately adjusting the dimensions, two left-handed (LH) bands with simultaneously negative permittivity and permeability are engineered and are validated by full-wave eigenmode analysis as well as by measurement in the microwave frequency range. To study the multi-resonant mechanism in depth, the LHM is analysed from three different perspectives: field distribution analysis, circuit model analysis, and geometrical parameter evaluation. The derived formulae are consistent with all simulated results and the resulting electromagnetic phenomena, indicating the effectiveness of the established theory. The method provides an alternative to the design of multi-band LHMs and has the advantage of not requiring two individual resonant particles and electrically continuous wires, which in turn facilitates planar design and considerably simplifies the fabrication. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  20. Information booklet on personal protective equipment: arm and hand protection

    International Nuclear Information System (INIS)

    1992-01-01

    Fire, heat, cold, electro-magnetic and ionising radiation, electricity, chemicals, impacts, cuts, abrasion, etc. are the common hazards for arms and hands at work. The gloves chosen for protection of the arm and hand should cover those parts adequately, and the material of the gloves should be capable of offering protection against the specific hazard involved. Criteria for choosing arm and hand protection equipment are based on its shape and on the part of the arm and hand protected. Guidelines for choosing such personal protection equipment for nuclear facilities are given. (M.K.V.). 3 annexures, 1 appendix

  1. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Full Text Available Since a gesture involves a dynamic and complex motion, multiview observation and recognition are desirable. For the better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark and our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself could vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.

  2. sEMG-Based Gesture Recognition with Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhen Ding

    2018-06-01

    Full Text Available Traditional classification methods for limb motion recognition based on sEMG have been researched in depth and have shown promising results. However, information loss during feature extraction reduces the recognition accuracy. To obtain higher accuracy, a deep learning method was introduced. In this paper, we propose a parallel multiple-scale convolution architecture. Compared with state-of-the-art methods, the proposed architecture fully considers the characteristics of the sEMG signal. Kernel filters larger than those commonly used in other CNN-based hand gesture recognition methods are adopted. Meanwhile, a further characteristic of the sEMG signal, muscle independence, is considered when designing the architecture. All the classification methods were evaluated on the NinaPro database. The results show that the proposed architecture has the highest recognition accuracy. Furthermore, the results indicate that a parallel multiple-scale convolution architecture with larger kernel filters that considers muscle independence can significantly increase the classification accuracy.
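
    A minimal sketch of a parallel multiple-scale 1-D convolution network in PyTorch, with two branches using different kernel sizes; the layer sizes, kernel sizes and the omission of the per-muscle grouping are simplifications for illustration, not the architecture of the cited paper.

```python
import torch
import torch.nn as nn

class MultiScaleEMGNet(nn.Module):
    """Two parallel 1-D convolution streams with different kernel sizes over
    windowed multi-channel sEMG, fused for gesture classification."""
    def __init__(self, n_channels=10, n_classes=8):
        super().__init__()
        self.branch_small = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.branch_large = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=45, padding=22), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, n_channels, n_samples)
        a = self.branch_small(x).flatten(1)    # small receptive field
        b = self.branch_large(x).flatten(1)    # large receptive field
        return self.classifier(torch.cat([a, b], dim=1))

# Smoke test on random data (hypothetical window of 200 samples, 10 channels):
# logits = MultiScaleEMGNet()(torch.randn(4, 10, 200))
```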

  3. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (Surface Electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean square (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of an EMG pattern recognition system. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for EMG signal
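
    As an illustration of the time-domain (TD) feature set and PCA step mentioned above, the sketch below computes classic Hudgins-style TD features per analysis window and reduces the stacked feature matrix with PCA; the exact features, thresholds and the ANFIS training itself are not reproduced and the choices here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def td_features(window, threshold=0.01):
    """Classic time-domain sEMG features for one analysis window:
    mean absolute value, waveform length, zero crossings, slope-sign changes."""
    w = np.asarray(window, dtype=float)
    dif = np.diff(w)
    mav = np.mean(np.abs(w))
    wl = np.sum(np.abs(dif))
    zc = np.sum((w[:-1] * w[1:] < 0) & (np.abs(dif) > threshold))
    ssc = np.sum((dif[:-1] * dif[1:] < 0) &
                 (np.maximum(np.abs(dif[:-1]), np.abs(dif[1:])) > threshold))
    return np.array([mav, wl, zc, ssc])

# Hypothetical usage: each window is a list of per-channel 1-D signals.
# X = np.vstack([np.hstack([td_features(ch) for ch in window]) for window in data])
# X_reduced = PCA(n_components=0.95).fit_transform(X)   # keep 95% of the variance
```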

  4. A Specific Role for Efferent Information in Self-Recognition

    Science.gov (United States)

    Tsakiris, M.; Haggard, P.; Franck, N.; Mainy, N.; Sirigu, A.

    2005-01-01

    We investigated the specific contribution of efferent information in a self-recognition task. Subjects experienced a passive extension of the right index finger, either as an effect of moving their left hand via a lever ('self-generated action'), or imposed externally by the experimenter ('externally-generated action'). The visual feedback was…

  5. Artificial Skin Ridges Enhance Local Tactile Shape Discrimination

    Directory of Open Access Journals (Sweden)

    Shuzhi Sam Ge

    2011-09-01

    Full Text Available One of the fundamental requirements for an artificial hand to successfully grasp and manipulate an object is to be able to distinguish different objects’ shapes and, more specifically, the objects’ surface curvatures. In this study, we investigate the possibility of enhancing the curvature detection of embedded tactile sensors by proposing a ridged fingertip structure, simulating human fingerprints. In addition, a curvature detection approach based on machine learning methods is proposed to provide the embedded sensors with the ability to discriminate the surface curvature of different objects. For this purpose, a set of experiments were carried out to collect tactile signals from a 2 × 2 tactile sensor array, then the signals were processed and used for learning algorithms. To achieve the best possible performance for our machine learning approach, three different learning algorithms, Naïve Bayes (NB), Artificial Neural Networks (ANN), and Support Vector Machines (SVM), were implemented and compared for various parameters. Finally, the most accurate method was selected to evaluate the proposed skin structure in recognition of three different curvatures. The results showed an accuracy rate of 97.5% in surface curvature discrimination.
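
    A minimal sketch of comparing the three classifier families on tactile feature vectors with scikit-learn; the placeholder data, feature dimensionality and classifier parameters are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# X: feature vectors derived from the 2 x 2 tactile array (e.g. per-taxel statistics),
# y: curvature class labels. Placeholder data below, for a runnable smoke test only.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))
y = np.arange(90) % 3

for name, clf in [("NB", GaussianNB()),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```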

  6. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    Science.gov (United States)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty in current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve the detection accuracy. The proposed system can be applied in interaction equipment without special training for users, such as household interactive television.
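
    The 8-direction feature can be pictured as quantizing the motion vector between consecutive hand positions into one of eight bins, which then form the discrete observation sequence fed to the HMMs; a minimal sketch follows (the paper's improved variant is not reproduced here).

```python
import math

def direction_bin(dx, dy, n_bins=8):
    """Quantize a motion vector (dx, dy) between consecutive hand positions
    into one of n_bins direction bins (0 = east, counting counter-clockwise)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int((angle + math.pi / n_bins) / (2 * math.pi / n_bins)) % n_bins

def trajectory_to_symbols(points):
    """Turn a hand trajectory [(x0, y0), (x1, y1), ...] into the discrete
    observation sequence used for HMM training and decoding."""
    return [direction_bin(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

# Example: a roughly rightward then upward movement.
print(trajectory_to_symbols([(0, 0), (5, 1), (10, 2), (11, 8)]))
```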

  7. The failures of root canal preparation with hand ProTaper.

    Science.gov (United States)

    Bătăiosu, Marilena; Diaconu, Oana; Moraru, Iren; Dăguci, C; Tuculină, Mihaela; Dăguci, Luminiţa; Gheorghiţă, Lelia

    2012-07-01

    The failures of root canal preparation are due to anatomical deviations (canals in "C" or "S" shape) and to technique errors. The technique errors usually occur during the root canal cleansing and shaping stage and result from deviation from the objectives of endodontic treatment. Our study examined technique errors made while preparing root canals with hand ProTaper. The study was performed "in vitro" on 84 extracted teeth (molars, premolars, incisors and canines). The root canals of these teeth were cleansed and shaped with hand ProTaper using the crown-down technique and canal irrigation with NaOCl (2.5%). The preparation was checked by X-ray. During root canal preparation several failures were observed, such as root canal overinstrumentation, zipping and stripping phenomena, and discarded and/or fractured instruments. Hand ProTaper represents revolutionary progress in endodontic treatment, but deviation from the accepted rules of root canal instrumentation can lead to failures of endodontic treatment.

  8. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts...... (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual...... in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation...

  9. Equipment for measuring contamination of hands with radioactive material

    International Nuclear Information System (INIS)

    Erban, J.; Kleinbauer, K.; Husak, V.; Grigar, O.

    1986-01-01

    The claimed device consists of a scintillation detector mounted in a shielding case consisting of rings. The shielding case is provided with a cavity with an inlet opening lined with polyethylene foil. The cavity shape, shielding and replaceable foil minimize the interfering effect of radiation sources in the vicinity and of contamination of the device. Gradually inserting the hand into the cavity, or suitably placing the hand, can locate contamination on the hand surface. The sensitivity of the device for 125I and 99Tc is 200 times higher than that of Geiger-Mueller counter instruments. (M.D.)

  10. Impaired processing of self-face recognition in anorexia nervosa.

    Science.gov (United States)

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p face recognition.

  11. A Survey on Banknote Recognition Methods by Various Sensors

    Science.gov (United States)

    Lee, Ji Woo; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    Despite a decrease in the use of currency due to the recent growth in the use of electronic financial transactions, real money transactions remain very important in the global market. While performing transactions with real money, touching and counting notes by hand, is still a common practice in daily life, various types of automated machines, such as ATMs and banknote counters, are essential for large-scale and safe transactions. This paper presents studies that have been conducted in four major areas of research (banknote recognition, counterfeit banknote detection, serial number recognition, and fitness classification) in the accurate banknote recognition field by various sensors in such automated machines, and describes the advantages and drawbacks of the methods presented in those studies. While to a limited extent some surveys have been presented in previous studies in the areas of banknote recognition or counterfeit banknote recognition, this paper is the first of its kind to review all four areas. Techniques used in each of the four areas recognize banknote information (denomination, serial number, authenticity, and physical condition) based on image or sensor data, and are actually applied to banknote processing machines across the world. This study also describes the technological challenges faced by such banknote recognition techniques and presents future directions of research to overcome them. PMID:28208733

  12. Chinese wine classification system based on micrograph using combination of shape and structure features

    Science.gov (United States)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded by their micrographs. Micrographs of Chinese wines show floccules, sticks and granules of varying shape and size. Different wines have different microstructures and micrographs, so we study the classification of Chinese wines based on the micrographs. The shape and structure of the wine's particles in the microstructure are the most important features for recognition and classification of wines. We therefore introduce a feature extraction method which can describe the structure and region shape of a micrograph efficiently. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in this paper, based on area, perimeter and traditional shape features; 26 features of eight kinds are selected in total. Finally, a Chinese wine classification system based on micrographs, using a combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features or the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.

  13. Object recognition memory in zebrafish.

    Science.gov (United States)

    May, Zacnicte; Morrill, Adam; Holcombe, Adam; Johnston, Travis; Gallup, Joshua; Fouad, Karim; Schalomon, Melike; Hamilton, Trevor James

    2016-01-01

    The novel object recognition, or novel-object preference (NOP) test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects, then after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown if this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. Contrary to rodents, zebrafish preferentially explored familiar over novel objects. Familiarity preference disappeared after delays of 5 mins. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1 min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50mg/L) prior to stimuli presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, exploration biases, from which memory is inferred, depend on object size. These findings suggest zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Object Recognition and Localization: The Role of Tactile Sensors

    Directory of Open Access Journals (Sweden)

    Achint Aggarwal

    2014-02-01

    Full Text Available Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach, called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF), is based on an innovative combination of particle filters, the Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments.

  15. The exchangeability of shape

    Directory of Open Access Journals (Sweden)

    Kaba Dramane

    2010-10-01

    Full Text Available Abstract Background Landmark based geometric morphometrics (GM) allows the quantitative comparison of organismal shapes. When applied to systematics, it is able to score shape changes which often are undetectable by traditional morphological studies and even by classical morphometric approaches. It has thus become a fast and low cost candidate to identify cryptic species. Due to inherent mathematical properties, shape variables derived from one set of coordinates cannot be compared with shape variables derived from another set. Raw coordinates which produce these shape variables could be used for data exchange, however they contain measurement error. The latter may represent a significant obstacle when the objective is to distinguish very similar species. Results We show here that a single user derived dataset produces much less classification error than a multiple one. The question then becomes how to circumvent the lack of exchangeability of shape variables while preserving a single user dataset. A solution to this question could lead to the creation of a relatively fast and inexpensive systematic tool adapted for the recognition of cryptic species. Conclusions To preserve both exchangeability of shape and a single user derived dataset, our suggestion is to create a free access bank of reference images from which one can produce raw coordinates and use them for comparison with external specimens. Thus, we propose an alternative geometric descriptive system that separates 2-D data gathering and analysis.

  16. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Full Text Available Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach combines advantageously classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in a RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently from the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensure stable grasp, and safe manipulation capability that will preserve the physical integrity of the object.
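
    Dynamic time warping compares two sequences of different lengths by finding the lowest-cost alignment between them, which is why it suits contour signatures whose length changes as the object deforms; a minimal textbook implementation follows (not the paper's exact pipeline).

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two contour signatures of different lengths still yield a comparable score:
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 3, 2, 1, 0]))
```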

  17. Hand function with touch screen technology in children with normal hand formation, congenital differences, and neuromuscular disease.

    Science.gov (United States)

    Shin, David H; Bohn, Deborah K; Agel, Julie; Lindstrom, Katy A; Cronquist, Sara M; Van Heest, Ann E

    2015-05-01

    To measure and compare hand function for children with normal hand development, congenital hand differences (CHD), and neuromuscular disease (NMD) using a function test with touch screen technology designed as an iPhone application. We measured touch screen hand function in 201 children including 113 with normal hand formation, 43 with CHD, and 45 with NMD. The touch screen test was developed on the iOS platform using an Apple iPhone 4. We measured 4 tasks: touching dots on a 3 × 4 grid, dragging shapes, use of the touch screen camera, and typing a line of text. The test takes 60 to 120 seconds and includes a pretest to familiarize the subject with the format. Each task is timed independently and the overall time is recorded. Children with normal hand development took less time to complete all 4 subtests with increasing age. When comparing children with normal hand development with those with CHD or NMD, in children aged less than 5 years we saw minimal differences; those aged 5 to 6 years with CHD took significantly longer total time; those aged 7 to 8 years with NMD took significantly longer total time; those aged 9 to 11 years with CHD took significantly longer total time; and those aged 12 years and older with NMD took significantly longer total time. Touch screen technology has become increasingly relevant to hand function in modern society. This study provides standardized age norms and shows that our test discriminates between normal hand development and that in children with CHD or NMD. Diagnostic III. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  18. COGNITIVE STYLE OF A PERSON AS A FACTOR OF EFFECTIVE EMOTION RECOGNITION

    Directory of Open Access Journals (Sweden)

    E V Belovol

    2015-12-01

    Full Text Available Facial expression is one of the most informative sources of non-verbal information. Early studies on the ability to recognize emotions from the face pointed to the universality of emotion expression and recognition. More recent studies have shown a combination of universal mechanisms and culture-specific patterns. The process of emotion recognition is based on face perception, which is why the in-group effect should be taken into consideration. The in-group advantage hypothesis posits that observers are more accurate at recognizing facial expressions displayed by members of their own culture than those displayed by members of other cultures. On the other hand, the process of emotion recognition is determined by cognitive features such as cognitive style. This article describes approaches to emotion expression and recognition and culture-specific features of basic emotion expression. It also describes factors related to the recognition of basic emotions by people from different cultures. It was found that field-independent people are more accurate in emotion recognition than field-dependent people because they are able to distinguish the markers of emotions. No correlation was found between successful emotion recognition and the observers’ gender, nor between successful emotion recognition and the observers’ race.

  19. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated from a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
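
    Per-gesture sensitivity and specificity can be read directly off a confusion matrix; a minimal sketch follows, with a simple combined score standing in for the paper's PNM index (the actual index is not reproduced in this record).

```python
import numpy as np

def per_class_sensitivity_specificity(confusion):
    """confusion[i, j] = number of windows with true gesture i classified as j.
    Returns (sensitivity, specificity) arrays, one entry per gesture."""
    c = np.asarray(confusion, dtype=float)
    tp = np.diag(c)
    fn = c.sum(axis=1) - tp
    fp = c.sum(axis=0) - tp
    tn = c.sum() - (tp + fn + fp)
    return tp / (tp + fn), tn / (tn + fp)

# Ranking gestures by a combined score (a stand-in for the PNM index):
# sens, spec = per_class_sensitivity_specificity(C)
# ranking = np.argsort(-(sens + spec))   # best-performing gestures first
```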

  20. User-independent accelerometer-based gesture recognition for mobile devices

    Directory of Open Access Journals (Sweden)

    Eduardo METOLA

    2013-07-01

    Full Text Available Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be lightweight and to operate in a user-independent manner in real time. The recognition system was implemented on a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.

  1. Isolated Hand Palsy Due to Small Cortical Infarcts: A Report of Two Cases

    Directory of Open Access Journals (Sweden)

    Meliha Tan

    2009-03-01

    Full Text Available The cortical motor hand area is a knob-like structure of the precentral gyrus, with an inverted omega or horizontal epsilon shape. Isolated hand weakness is infrequently observed and is usually due to small cortical infarcts of this hand knob structure. Isolated hand palsy is sometimes restricted to radial-sided fingers or ulnar-sided fingers. Uniform weakness in all fingers may also occur. We present 2 patients with small cortical infarcts of the cortical hand knob due to different etiologies. A 61-year-old male had right hand weakness restricted to his first and second digits. He had a small cortical infarct on the hand knob area due to emboli from ulcerative plaque of the ipsilateral internal carotid artery. The other patient, a 23-year-old female with sickle cell anemia, had uniform left hand weakness due to an epsilon-shaped infarct on the right precentral gyrus. An obstruction of the small cerebral artery supply to the hand knob area due to sickle cell anemia was the likely pathogenic mechanism. It is suggested that isolated hand weakness due to small cortical infarcts may have different etiologies, most commonly hemodynamic or embolic processes. Conditions that rarely cause brain infarction, such as sickle cell anemia, deserve clinical attention. Investigations of strokes must include anemia tests. Patients with predominant weakness of the radial group of fingers due to cortical infarct must be checked for embolism.

  2. Event-related potentials during word mapping to object shape predict toddlers’ vocabulary size

    Directory of Open Access Journals (Sweden)

    Kristina eBorgström

    2015-02-01

    Full Text Available What role does attention to different object properties play in early vocabulary development? This longitudinal study using event-related potentials in combination with behavioral measures investigated 20- and 24-month-olds’ (n = 38; n = 34; overlapping n = 24) ability to use object shape and object part information in word-object mapping. The N400 component was used to measure semantic priming by images containing shape or detail information. At 20 months, the N400 to words primed by object shape varied in topography and amplitude depending on vocabulary size, and these differences predicted productive vocabulary size at 24 months. At 24 months, when most of the children had vocabularies of several hundred words, the relation between vocabulary size and the N400 effect in a shape context was weaker. Detached object parts did not function as word primes regardless of age or vocabulary size, although the part-objects were identified behaviorally. The behavioral measure, however, also showed relatively poor recognition of the part-objects compared to the shape-objects. These three findings provide new support for the link between shape recognition and early vocabulary development.

  3. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, including neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and decision-level fusion performs better than feature-level fusion.
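
    A minimal sketch of the feature-level fusion idea, concatenating weighted feature groups and projecting them with LDA using scikit-learn; the data, dimensions and weights below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# face_feats: (n_samples, d1) facial-expression features,
# hand_feats: (n_samples, d2) hand-motion features, y: gesture class labels.
# Placeholder data for a runnable illustration (12 gesture classes).
rng = np.random.default_rng(1)
face_feats = rng.normal(size=(120, 20))
hand_feats = rng.normal(size=(120, 30))
y = np.arange(120) % 12

w_face, w_hand = 0.4, 0.6                              # feature-group weights
fused = np.hstack([w_face * face_feats, w_hand * hand_feats])

lda = LinearDiscriminantAnalysis(n_components=11)      # at most n_classes - 1
projected = lda.fit_transform(fused, y)                # discriminative subspace
print(projected.shape)                                 # (120, 11)
```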

  4. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, due to contralateral innervation, this functional lateralization is reflected in a hand advantage during certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from a low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
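
    The discriminability index d' is computed from hit and false-alarm rates as d' = z(hit rate) - z(false-alarm rate); a minimal sketch using SciPy follows (the 0.5-count correction is one common convention and is assumed here, not taken from the study).

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection discriminability index d'."""
    def rate(k, n):
        # Keep rates away from 0 and 1 so the z-transform stays finite.
        return min(max(k / n, 0.5 / n), 1 - 0.5 / n)
    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example counts from a same/different discrimination block (hypothetical):
print(d_prime(hits=18, misses=2, false_alarms=5, correct_rejections=15))
```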

  5. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques as more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in these areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  6. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    Directory of Open Access Journals (Sweden)

    Min Peng

    2017-10-01

    Full Text Available Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network; the different streams of the DTSCNN are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than that of some state-of-the-art methods.

  7. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition.

    Science.gov (United States)

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement that can reveal the genuine emotion people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network; the different streams of the DTSCNN are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid overfitting. Meanwhile, the networks are fed with optical-flow sequences to ensure that the shallow networks can acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than that of some state-of-the-art methods.

  9. Hand motion segmentation against skin colour background in breast awareness applications.

    Science.gov (United States)

    Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick

    2004-01-01

    Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. The hand is an essential tool in breast self-examination, and it needs to be detected and analysed during breast palpation. However, the background of a woman's moving hand is her breast, which has the same or a similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by lighting and brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without using any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify the breast but not the hand. First, the input image is transformed to YCbCr colour space, which is less sensitive to lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Second, motion cues are used to segment the hand motion against the detected baseline. The desired segmentation results have been achieved and the robustness of the algorithm is demonstrated in this paper.
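
    A minimal sketch of the kind of two-dimensional Gaussian chroma model described above, assuming the Gaussian is fitted to the chroma (Cr, Cb) components only so that luminance variation has little effect. The mean vector and covariance matrix below are hypothetical placeholders for statistics that would be estimated from labelled skin patches.

```python
import numpy as np
import cv2

def skin_likelihood(bgr_image, mean, cov):
    """Per-pixel likelihood of skin under a 2D Gaussian in (Cr, Cb) chroma space."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    chroma = ycrcb[..., 1:3].reshape(-1, 2)        # drop Y: more tolerant to lighting
    diff = chroma - mean
    mahal = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return np.exp(-0.5 * mahal).reshape(bgr_image.shape[:2])

# Hypothetical chroma statistics (would be learned from labelled skin pixels)
skin_mean = np.array([150.0, 115.0])                 # (Cr, Cb)
skin_cov = np.array([[60.0, 10.0], [10.0, 40.0]])
# skin_mask = skin_likelihood(frame, skin_mean, skin_cov) > 0.5
```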

  10. Evaluation of calix[4]arene tethered Schiff bases for anion recognition

    International Nuclear Information System (INIS)

    Chawla, H.M.; Munjal, Priyanka

    2016-01-01

    Two calix[4]arene tethered Schiff base derivatives (L1 and L2) have been synthesized and their ion recognition capability has been evaluated through NMR, UV–vis and fluorescence spectroscopy. L1 interacts with cyanide ions very selectively to bring about a significant change in color and fluorescence intensity. On the other hand, L2 does not show selectivity for anion sensing despite having the same functional groups as those present in L1. The differential observations may be attributed to plausible stereo control of anion recognition and tautomerization in the synthesized Schiff base derivatives.

  11. Evaluation of calix[4]arene tethered Schiff bases for anion recognition

    Energy Technology Data Exchange (ETDEWEB)

    Chawla, H.M., E-mail: hmchawla@chemistry.iitd.ac.in; Munjal, Priyanka

    2016-11-15

    Two calix[4]arene tethered Schiff base derivatives (L1 and L2) have been synthesized and their ion recognition capability has been evaluated through NMR, UV–vis and fluorescence spectroscopy. L1 interacts with cyanide ions very selectively to bring about a significant change in color and fluorescence intensity. On the other hand, L2 does not show selectivity for anion sensing despite having the same functional groups as those present in L1. The differential observations may be attributed to plausible stereo control of anion recognition and tautomerization in the synthesized Schiff base derivatives.

  12. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    Science.gov (United States)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control: a trade-off between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures specially developed for the validation of the proposed system.
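
    A rough sketch of the descriptor-plus-SVM pipeline described above, using an off-the-shelf uniform LBP per depth frame as a simplification of the paper's spatio-temporal LBP variant. The neighbourhood parameters and the data layout are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def depth_clip_descriptor(depth_frames, p=8, r=1):
    """Concatenate per-frame uniform-LBP histograms into a clip-level descriptor."""
    n_bins = p + 2                     # rotation-invariant uniform patterns
    hists = []
    for frame in depth_frames:         # depth_frames: (n_frames, H, W)
        codes = local_binary_pattern(frame, P=p, R=r, method='uniform')
        h, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(h)
    return np.concatenate(hists)

# Hypothetical usage: clips is a list of depth videos, labels the gesture classes
# X = np.stack([depth_clip_descriptor(clip) for clip in clips])
# classifier = SVC(kernel='rbf').fit(X, labels)
```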

  13. Automatic anatomy recognition on CT images with pathology

    Science.gov (United States)

    Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Body-wide anatomy recognition on CT images with pathology is becoming crucial for quantifying body-wide disease burden. This, however, is a challenging problem because different diseases result in varied abnormalities of object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near-normal diagnostic CT images in different body regions for 35 organs. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies, as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps - model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new strategy for object recognition to handle abnormal images. In the model-building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that object localization accuracy within 2 voxels for liver and spleen and 3 voxels for kidney can be achieved with the new strategy.
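
    The abstract does not spell out how the optimal threshold interval is learned, so the sketch below uses a simple quantile-based stand-in: pool the intensities inside each object's masks across the near-normal training images and keep a central interval. It is included only to illustrate the idea; the quantiles and variable names are assumptions.

```python
import numpy as np

def learn_threshold_interval(train_images, train_masks, quantiles=(0.05, 0.95)):
    """Per-object intensity interval from near-normal training images (quantile stand-in)."""
    pooled = np.concatenate([img[mask > 0].ravel()
                             for img, mask in zip(train_images, train_masks)])
    return np.quantile(pooled, quantiles[0]), np.quantile(pooled, quantiles[1])

# At recognition time, voxels inside the learned interval are candidate object voxels:
# lo, hi = learn_threshold_interval(images, liver_masks)
# candidate = (ct_volume >= lo) & (ct_volume <= hi)
```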

  14. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    Science.gov (United States)

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-11-26

    Objective: To evaluate the diagnostic performance of the IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examination within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold-standard diagnosis was based on histological or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 of 15) and a specificity of 97.4%. A sausage-shaped appearance was identified in nearly all cases (14 of 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40% of cases, respectively. Conclusion: The IOTA simple rules plus pattern recognition are relatively effective in predicting tubal cancer. Thus, we propose the following simple scheme for the diagnosis of tubal cancer. First, the adnexal mass is evaluated with the IOTA simple rules. If the B-rules can be applied, tubal cancer is reliably excluded. If the M-rules can be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed.
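
    The proposed two-step scheme can be phrased as a small decision function. The sketch below only mirrors the flow described above; the way the three pattern-recognition signs are combined into a final call is an assumption, not the authors' rule.

```python
def tubal_cancer_triage(b_rules_apply: bool, m_rules_apply: bool,
                        sausage_shaped: bool, incomplete_septum: bool,
                        ipsilateral_ovary_visible: bool) -> str:
    """Step 1: IOTA simple rules; step 2: pattern recognition for the remaining masses."""
    if b_rules_apply and not m_rules_apply:
        return "benign by B-rules: tubal cancer reliably excluded"
    # M-rules apply or the simple rules are inconclusive: delineate the mass carefully
    if sausage_shaped and (incomplete_septum or ipsilateral_ovary_visible):
        return "pattern suggestive of tubal cancer"
    return "malignant or inconclusive mass, pattern not typical of tubal cancer"
```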

  15. More than two HANDs to tango.

    Science.gov (United States)

    Kolson, Dennis; Buch, Shilpa

    2013-12-01

    Developing a validated tool for the rapid and efficient assessment of cognitive functioning in HIV-infected patients in a typical outpatient clinical setting has been an unmet goal of HIV research since the recognition of the syndrome of HIV-associated dementia (HAD) nearly 20 years ago. In this issue of JNIP, Cross et al. report the application of the International HIV Dementia Scale (IHDS) in a U.S.-based urban outpatient clinic to evaluate its utility as a substitute for the more time- and effort-demanding formalized testing criteria, known as the Frascati criteria, that were developed in 2007 to define the syndrome of HIV-associated neurocognitive disorders (HAND). In this cross-sectional study, an unselected cohort of 507 individuals (68% African American) assessed with the IHDS showed a 41% prevalence of cognitive impairment (labeled ‘symptomatic HAND’), which was associated with African American race, older age, unemployment, education level, and depression. While the associations between cognitive impairment and older age, education, unemployment status and depression in HIV-infected patients are not surprising, the association between African American ancestry and cognitive impairment in the setting of HIV infection is a novel finding of this study. This commentary discusses several important issues raised by the study, including the pitfalls of assessing cognitive functioning with rapid screening tools, cognitive testing criteria, normative testing control groups, accounting for HAND co-morbidity factors, considerations for clinical trials assessing HAND, and selective population vulnerability to HAND.

  16. A Robust and Fast Computation Touchless Palm Print Recognition System Using LHEAT and the IFkNCN Classifier

    Directory of Open Access Journals (Sweden)

    Haryati Jaafar

    2015-01-01

    Full Text Available Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method are discussed. A sliding neighborhood operation with local histogram equalization, followed by local adaptive thresholding (the LHEAT approach), was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, the improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibits faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
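
    A rough scikit-image sketch of the LHEAT idea, local histogram equalization followed by local adaptive thresholding. CLAHE is used here as a stand-in for the paper's sliding-neighborhood equalization, and the window and block sizes are illustrative rather than the published parameters.

```python
from skimage import exposure, filters
from skimage.util import img_as_float

def lheat(palm_gray, eq_kernel=32, thresh_block=35):
    """Local histogram equalization (CLAHE stand-in) + local adaptive thresholding."""
    img = img_as_float(palm_gray)
    equalized = exposure.equalize_adapthist(img, kernel_size=eq_kernel)
    local_thresh = filters.threshold_local(equalized, block_size=thresh_block,
                                           method='gaussian')
    return equalized > local_thresh      # binary map of palm lines and creases
```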

  17. Incomplete contour representations and shape descriptors : ICR test studies

    NARCIS (Netherlands)

    Ghosh, Anarta; Petkov, Nicolai; Gregorio, MD; DiMaio,; Frucci, M; Musio, C

    2005-01-01

    Inspired by psychophysical studies of human cognitive abilities, we propose a novel aspect and method for the performance evaluation of contour-based shape recognition algorithms regarding their robustness to incompleteness of contours. We use complete contour representations of objects as a

  18. Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors

    Science.gov (United States)

    Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.

    We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. The electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are used to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30% of the cases and within the top five in a further 30% of the cases, with an 80% probability of the correct ligand appearing within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.

  19. 2D Affine and Projective Shape Analysis.

    Science.gov (United States)

    Bryner, Darshan; Klassen, Eric; Huiling Le; Srivastava, Anuj

    2014-05-01

    Current techniques for shape analysis tend to seek invariance to similarity transformations (rotation, translation, and scale), but certain imaging situations require invariance to larger groups, such as affine or projective groups. Here we present a general Riemannian framework for shape analysis of planar objects where metrics and related quantities are invariant to affine and projective groups. Highlighting two possibilities for representing object boundaries - ordered points (or landmarks) and parameterized curves - we study different combinations of these representations (points and curves) and transformations (affine and projective). Specifically, we provide solutions to three out of four situations and develop algorithms for computing geodesics and intrinsic sample statistics, leading up to Gaussian-type statistical models, and classifying test shapes using such models learned from training data. In the case of parameterized curves, we also achieve the desired goal of invariance to re-parameterizations. The geodesics are constructed by particularizing the path-straightening algorithm to the geometries of the current manifolds and are used, in turn, to compute shape statistics and Gaussian-type shape models. We demonstrate these ideas using a number of examples from shape and activity recognition.

  20. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    NARCIS (Netherlands)

    Shoaib, M.; Bosch, S.; Durmaz, O.; Scholten, Johan; Havinga, Paul J.M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such

  1. Virtual hand: a 3D tactile interface to virtual environments

    Science.gov (United States)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  2. Neural network application for thermal image recognition of low-resolution objects

    Science.gov (United States)

    Fang, Yi-Chin; Wu, Bo-Wen

    2007-02-01

    In the ever-changing situation on a battlefield, accurate recognition of a distant object is critical to a commander's decision-making and the general public's safety. Efficiently distinguishing between an enemy's armoured vehicles and ordinary civilian houses under all weather conditions has become an important research topic. This study presents a system for recognizing an armoured vehicle by distinguishing marks and contours. The characteristics of 12 different shapes and 12 characters are used to explore thermal image recognition under conditions of long distance and low resolution. Although the recognition capability of the human eye is superior to that of artificial intelligence under normal conditions, it deteriorates substantially in long-distance, low-resolution scenarios. This study presents an effective method for choosing features and processing images. The artificial neural network technique is applied to further improve the probability of accurate recognition well beyond the limit of the recognition capability of the human eye.

  3. Manual activity shapes structure and function in contralateral human motor hand area

    DEFF Research Database (Denmark)

    Granert, Oliver; Peller, Martin; Gaser, Christian

    2011-01-01

    which was designed to improve handwriting-associated dystonia. Initially the dystonic hand was immobilized for 4 weeks with the intention to reverse faulty plasticity. After immobilization, patients accomplished a motor re-training for 8 weeks. T1-weighted MRIs of the whole brain and single-pulse TMS...

  4. Two-stage neural-network-based technique for Urdu character two-dimensional shape representation, classification, and recognition

    Science.gov (United States)

    Megherbi, Dalila B.; Lodhi, S. M.; Boulenouar, A. J.

    2001-03-01

    This work is in the field of automated document processing and addresses the problem of representation and recognition of Urdu characters using a Fourier representation and a neural network architecture. In particular, we show that a two-stage neural network scheme is used to classify 36 Urdu characters into seven sub-classes, characterized by seven proposed fuzzy features specifically related to Urdu characters. We show that Fourier descriptors and the neural network provide a remarkably simple way to draw definite conclusions from vague, ambiguous, noisy or imprecise information. In particular, we illustrate the concept of interest regions and describe a framing method that makes the proposed technique for Urdu character recognition robust and invariant to scaling and translation. We also show that character rotation is dealt with by using the Hotelling transform. This transform is based upon the eigenvalue decomposition of the covariance matrix of an image, providing a method of determining the orientation of the major axis of an object within an image. Finally, experimental results are presented to show the power and robustness of the proposed two-stage neural network technique for Urdu character recognition, its fault tolerance, and its high recognition accuracy.
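
    The Hotelling transform mentioned above amounts to a principal component analysis of the foreground pixel coordinates: the eigenvector of their covariance matrix with the largest eigenvalue gives the character's major axis, from which the rotation can be normalized. A minimal sketch, with the angle convention chosen here being an assumption:

```python
import numpy as np

def principal_orientation(binary_char):
    """Angle (degrees) of the major axis of a binary character image via PCA."""
    ys, xs = np.nonzero(binary_char)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)                    # translation invariance
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]           # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0]))

# The character can then be rotated by -principal_orientation(img) before
# computing Fourier descriptors, making the features rotation invariant.
```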

  5. Latin Letters Recognition Using Optical Character Recognition to Convert Printed Media Into Digital Format

    Directory of Open Access Journals (Sweden)

    Rio Anugrah

    2017-12-01

    Full Text Available Printed media is still popular in today's society. Unfortunately, such media has several drawbacks; for example, it consumes substantial storage, which results in high maintenance costs. To keep printed information more efficient and long-lasting, people usually convert it into digital format. In this paper, we built an Optical Character Recognition (OCR) system to enable automatic conversion of images containing sentences in Latin characters into digital text. This system consists of several interrelated stages, including preprocessing, segmentation, feature extraction, classification, modelling and recognition. In preprocessing, a median filter is used to clean the image of noise and Otsu's method is used to binarize the image. This is followed by character segmentation using connected-component labeling. An artificial neural network (ANN) is used for feature extraction to recognize the characters. The results show that the system is able to recognize the characters in the image, with a success rate that is influenced by the training of the system.
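
    A compact OpenCV sketch of the preprocessing and segmentation stages named above (median filtering, Otsu binarization, connected-component labelling). The kernel size and the minimum component area are illustrative choices, not the paper's parameters.

```python
import cv2

def preprocess_and_segment(image_path, min_area=20):
    """Median filter -> Otsu binarization -> connected-component character candidates."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(gray, 3)
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # each row of stats (skipping background label 0) holds x, y, w, h, area
    boxes = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return binary, boxes
```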

  6. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    Science.gov (United States)

    Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas

    2016-01-01

    To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. PMID:27604782

  7. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    Directory of Open Access Journals (Sweden)

    Birger Kollmeier

    2016-06-01

    Full Text Available To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only.

  8. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    Science.gov (United States)

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.

  9. 3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints

    Science.gov (United States)

    Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent

    2017-12-01

    The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to an increasing amount of 3D model creation and usage, thanks to 3D sensors that are readily available and easy to use. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark data sets. In this article, a novel technique for 3D shape representation and object classification is proposed based on analysis of the spatial, geometric distributions of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints, which are stable on the object surface and invariant to pose changes. Each class/instance of an object can be uniquely represented by a PDF. This shape representation is robust yet conceptually simple, easy to implement and fast to compute. Both Euclidean and topological spaces on the object's surface are considered to build the PDFs. Topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested in terms of object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and a comparative evaluation against state-of-the-art methods on a standard benchmark dataset is performed. Experimental results demonstrate superior classification performance of the new approach on the RGB-D dataset and depth data.
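
    A simplified sketch of the keypoint-distribution idea: build a probability density over pairwise keypoint distances and compare shapes by comparing densities. Euclidean distances are used here for brevity, whereas the paper also uses geodesic distances on the object surface; the evaluation grid and the normalization are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import gaussian_kde

def keypoint_distance_pdf(keypoints_xyz, grid=np.linspace(0.0, 1.0, 128)):
    """PDF of normalized pairwise keypoint distances as a pose-invariant signature."""
    d = pdist(keypoints_xyz)          # Euclidean stand-in for geodesic distances
    d = d / d.max()                   # scale normalization
    return gaussian_kde(d)(grid)      # density evaluated on a fixed grid

# Shapes can then be compared, e.g., with an L1 distance between their PDFs:
# dissimilarity = np.abs(keypoint_distance_pdf(a) - keypoint_distance_pdf(b)).sum()
```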

  10. Word and face recognition deficits following posterior cerebral artery stroke

    DEFF Research Database (Denmark)

    Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina

    2016-01-01

    Recent findings have challenged the existence of category-specific brain areas for perceptual processing of words and faces, suggesting the existence of a common network supporting the recognition of both. We examined the performance of patients with focal lesions in posterior cortical areas to investigate whether deficits in recognition of words and faces systematically co-occur, as would be expected if both functions rely on a common cerebral network. Seven right-handed patients with unilateral brain damage following stroke in areas supplied by the posterior cerebral artery were included (four with right hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm with four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects

  11. Rapid de novo shape encoding: a challenge to connectionist modeling

    OpenAIRE

    Greene, Ernest

    2018-01-01

    Neural network (connectionist) models are designed to encode image features and provide the building blocks for object and shape recognition. These models generally call for: a) initial diffuse connections from one neuron population to another, and b) training to bring about a functional change in those connections so that one or more high-tier neurons will selectively respond to a specific shape stimulus. Advanced models provide for translation, size, and rotation invariance. The present dis...

  12. Modeling Self-Occlusions/Disocclusions in Dynamic Shape and Appearance Tracking for Obtaining Precise Shape

    KAUST Repository

    Yang, Yanchao

    2013-05-01

    We present a method to determine the precise shape of a dynamic object from video. This problem is fundamental to computer vision, and has a number of applications, for example, 3D video/cinema post-production, activity recognition and augmented reality. Current tracking algorithms that determine precise shape can be roughly divided into two categories: 1) Global statistics partitioning methods, where the shape of the object is determined by discriminating global image statistics, and 2) Joint shape and appearance matching methods, where a template of the object from the previous frame is matched to the next image. The former is limited in cases of complex object appearance and cluttered background, where global statistics cannot distinguish between the object and background. The latter is able to cope with complex appearance and a cluttered background, but is limited in cases of camera viewpoint change and object articulation, which induce self-occlusions and self-disocclusions of the object of interest. The purpose of this thesis is to model self-occlusion/disocclusion phenomena in a joint shape and appearance tracking framework. We derive a non-linear dynamic model of the object shape and appearance taking into account occlusion phenomena, which is then used to infer self-occlusions/disocclusions, shape and appearance of the object in a variational optimization framework. To ensure robustness to other unmodeled phenomena that are present in real-video sequences, the Kalman filter is used for appearance updating. Experiments show that our method, which incorporates the modeling of self-occlusion/disocclusion, increases the accuracy of shape estimation in situations of viewpoint change and articulation, and out-performs current state-of-the-art methods for shape tracking.

  13. Intrinsic shapes of discy and boxy ellipticals

    International Nuclear Information System (INIS)

    Fasano, Giovanni

    1991-01-01

    Statistical tests for the intrinsic shapes of elliptical galaxies have so far given inconclusive and sometimes contradictory results. These failures have often been attributed to the fact that classical tests consider only the two axisymmetric shapes (oblate versus prolate), while ellipticals are truly triaxial bodies. On the other hand, recent analyses indicate that the class of elliptical galaxies could be a mixture of (at least) two families having different morphology and dynamical behaviour: (i) a family of fast-rotating, disc-like ellipticals (discy); (ii) a family of slow-rotating, box-shaped ellipticals (boxy). In this paper we review the tests for the intrinsic shapes of elliptical galaxies using data of better quality (CCD) than in previous applications. (author)

  14. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    Science.gov (United States)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  15. From global to local statistical shape priors novel methods to obtain accurate reconstruction results with a limited amount of training shapes

    CERN Document Server

    Last, Carsten

    2017-01-01

    This book proposes a new approach to handle the problem of limited training data. Common approaches to cope with this problem are to model the shape variability independently across predefined segments or to allow artificial shape variations that cannot be explained through the training data, both of which have their drawbacks. The approach presented uses a local shape prior in each element of the underlying data domain and couples all local shape priors via smoothness constraints. The book provides a sound mathematical foundation in order to embed this new shape prior formulation into the well-known variational image segmentation framework. The new segmentation approach so obtained allows accurate reconstruction of even complex object classes with only a few training shapes at hand.

  16. Social power and recognition of emotional prosody: High power is associated with lower recognition accuracy than low power.

    Science.gov (United States)

    Uskul, Ayse K; Paulmann, Silke; Weick, Mario

    2016-02-01

    Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.

  17. Attentional Selection for Object Recognition - A Gentle Way

    National Research Council Canada - National Science Library

    Walther, Dirk; Itti, Laurent; Riesenhuber, Maximilian; Poggio, Tomaso; Koch, Christof

    2002-01-01

    ...% at a high level is sufficient to recognize multiple objects. To determine the size and shape of the region to be modulated, a rough segmentation is performed, based on pre-attentive features already computed to guide attention. Testing with synthetic and natural stimuli demonstrates that our new approach to attentional selection for recognition yields encouraging results in addition to being biologically plausible.

  18. Enhanced associative memory for colour (but not shape or location) in synaesthesia.

    OpenAIRE

    Pritchard Jamie; Rothen Nicolas; Coolbear Daniel; Ward Jamie

    2013-01-01

    People with grapheme-colour synaesthesia have been shown to have enhanced memory on a range of tasks, using both stimuli that induce synaesthesia (e.g. words) and, more surprisingly, stimuli that do not (e.g. certain abstract visual stimuli). This study examines the latter by using multi-featured stimuli consisting of shape, colour and location conjunctions (e.g. shape A + colour A + location A; shape B + colour B + location B) presented in a recognition memory paradigm. This enables distractor items to ...

  19. Geometric morphometric evaluation of cervical vertebrae shape and its relationship to skeletal maturation.

    Science.gov (United States)

    Chatzigianni, Athina; Halazonetis, Demetrios J

    2009-10-01

    Cervical vertebrae shape has been proposed as a diagnostic factor for assessing skeletal maturation in orthodontic patients. However, evaluation of vertebral shape is mainly based on qualitative criteria. Comprehensive quantitative measurements of shape and assessments of its predictive power have not been reported. Our aims were to measure vertebral shape by using the tools of geometric morphometrics and to evaluate the correlation and predictive power of vertebral shape on skeletal maturation. Pretreatment lateral cephalograms and corresponding hand-wrist radiographs of 98 patients (40 boys, 58 girls; ages, 8.1-17.7 years) were used. Skeletal age was estimated from the hand-wrist radiographs. The first 4 vertebrae were traced, and 187 landmarks (34 fixed and 153 sliding semilandmarks) were used. Sliding semilandmarks were adjusted to minimize bending energy against the average of the sample. Principal components analysis in shape and form spaces was used for evaluating shape patterns. Shape measures, alone and combined with centroid size and age, were assessed as predictors of skeletal maturation. Shape alone could not predict skeletal maturation better than chronologic age. The best prediction was achieved with the combination of form space principal components and age, giving 90% prediction intervals of approximately 200 maturation units in the girls and 300 units in the boys. Similar predictive power could be obtained by using centroid size and age. Vertebrae C2, C3, and C4 gave similar results when examined individually or combined. C1 showed lower correlations, signifying lower integration with hand-wrist maturation. Vertebral shape is strongly correlated to skeletal age but does not offer better predictive value than chronologic age.

  20. Evaluation of Hand Stereognosis Level in 3-6 years Old Children with Spastic Hemiplegia and Diplegia

    Directory of Open Access Journals (Sweden)

    Minou Kalantari

    2013-07-01

    Full Text Available Objective: One of the most prevalent sensory problems in cerebral palsy is astereognosis, which has special importance for daily manual functions. The purpose of this study was to determine the level of hand stereognosis using common objects and geometric shapes in children with spastic hemiplegia and diplegia. Materials & Methods: In this cross-sectional study, 20 children with cerebral palsy between 3 and 6 years old (9 males, 11 females; mean age: hemiplegia 55 months, diplegia 57 months) were selected through non-randomized convenience sampling, referred to the Occupational Therapy centers of Shahid Beheshti University of Medical Sciences. Stereognosis was evaluated using geometric shapes (square, circle, rectangle, triangle), common objects (pencil, key, coin, nail, teaspoon and screw) and a special test board. The data were analyzed with mixed analysis of variance and regression tests. Results: There was no significant regression between the common-object stereognosis score and age in hemiplegic children, but this regression was significant for the stereognosis score of geometric shapes (P=0.027). There was no significant regression for the stereognosis scores of common objects and geometric shapes in diplegic children. The main effect of gender on stereognosis was not significant in children with spastic hemiplegia and diplegia, and the main effect of hand was also not significant in the two groups. Conclusion: There was no significant difference between the stereognosis of the affected and unaffected hand in hemiplegic children, or between the right and left hands in diplegic children. Also, there was no significant regression between age and the stereognosis score of geometric shapes in diplegic children.

  1. Speech recognition in individuals with sensorineural hearing loss

    Directory of Open Access Journals (Sweden)

    Adriana Neves de Andrade

    Full Text Available ABSTRACT INTRODUCTION: Hearing loss can negatively influence the communication performance of individuals, who should be evaluated with suitable material and in listening situations close to those found in everyday life. OBJECTIVE: To analyze and compare the performance of patients with mild-to-moderate sensorineural hearing loss in speech recognition tests carried out in silence and in noise, according to the variables ear (right and left) and type of stimulus presentation. METHODS: The study included 19 right-handed individuals with mild-to-moderate symmetrical bilateral sensorineural hearing loss, submitted to a speech recognition test with words presented in different modalities and a speech test with white noise and pictures. RESULTS: There was no significant difference between the right and left ears in any of the tests. The mean number of correct responses in the speech recognition test with pictures, live voice, and recorded monosyllables was 97.1%, 85.9%, and 76.1%, respectively, whereas after the introduction of noise, performance decreased to 72.6% accuracy. CONCLUSIONS: The best performance in the Speech Recognition Percentage Index was obtained using monosyllabic stimuli represented by pictures presented in silence, with no significant difference between the right and left ears. After the introduction of competitive noise, there was a decrease in the individuals' performance.

  2. Automatic recognition of ship types from infrared images using superstructure moment invariants

    Science.gov (United States)

    Li, Heng; Wang, Xinyu

    2007-11-01

    Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system for the autonomous recognition of ship types in infrared images is proposed. First, a segmentation approach based on the detection of salient features of the target, with subsequent shadow removal, is proposed; this forms the basis of the subsequent object recognition. Considering that the differences between the shapes of various ships lie mainly in their superstructures, we then use superstructure moment functions that are invariant to translation, rotation and scale differences in the input patterns, and develop a robust algorithm for obtaining the ship superstructure. Subsequently, a back-propagation neural network is used as a classifier in the recognition stage, and projection images of simulated three-dimensional ship models are used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward Looking Infrared (FLIR) sensor.
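
    Hu's seven moment invariants, available in OpenCV, are the textbook example of translation-, rotation- and scale-invariant shape moments; the sketch below uses them as a stand-in for the paper's superstructure moment functions and assumes a binary superstructure silhouette as input.

```python
import cv2
import numpy as np

def superstructure_moment_features(silhouette):
    """Hu moment invariants of a binary superstructure silhouette, log-scaled."""
    moments = cv2.moments(silhouette.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # log-scaling keeps the widely ranging invariants comparable for a classifier
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# The resulting 7-D vectors could feed a back-propagation classifier
# (e.g., sklearn.neural_network.MLPClassifier) trained on projections of 3D ship models.
```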

  3. Lateral and medial ventral occipitotemporal regions interact during the recognition of images revealed from noise

    Directory of Open Access Journals (Sweden)

    Barbara eNordhjem

    2016-01-01

    Full Text Available Several studies suggest different functional roles for the medial and the lateral ventral sections in object recognition. Texture and surface information is processed in medial regions, while shape information is processed in lateral sections. This begs the question whether and how these functionally specialized sections interact with each other and with early visual cortex to facilitate object recognition. In the current research, we set out to answer this question. In an fMRI study, thirteen subjects viewed and recognized images of objects and animals that were gradually revealed from noise while their brains were being scanned. We applied dynamic causal modeling (DCM) – a method to characterize network interactions – to determine the modulatory effect of object recognition on a network comprising the primary visual cortex (V1), the lingual gyrus (LG) in medial ventral cortex and the lateral occipital cortex (LO). We found that object recognition modulated the bilateral connectivity between LG and LO. Moreover, the feed-forward connectivity from V1 to LG and LO was modulated, while there was no evidence for feedback from these regions to V1 during object recognition. In particular, the interaction between medial and lateral areas supports a framework in which visual recognition of objects is achieved by networked regions that integrate information on image statistics, scene content and shape – rather than by a single categorically specialized region – within the ventral visual cortex.

  4. Corticosterone and propranolol's role on taste recognition memory.

    Science.gov (United States)

    Ruetti, E; Justel, N; Mustaca, A; Boccia, M

    2014-12-01

    Taste recognition is a robust procedure to study learning and memory processes, as well as the different stages involved in them, i.e. encoding, storage and recall. Considerable evidence indicates that adrenal hormones and the noradrenergic system play an important role in aversive and appetitive memory formation in rats and humans. The present experiments were designed to characterize the effects of immediate post training corticosterone (Experiment 1) and propranolol administration (Experiment 2 and 3) on taste recognition memory. Administration of a high dose of corticosterone (5mg/kg, sc) impairs consolidation of taste memory, but the low and moderate doses (1 and 3mg/kg, sc) didn't affect it. On the other hand, immediate post-training administration of propranolol (1 and 2mg/kg, ip) impaired taste recognition memory. These effects were time-dependent since no effects were seen when drug administration was delayed 3h after training. These findings support the importance of stress hormones and noradrenergic system on the modulation of taste memory consolidation. Copyright © 2014. Published by Elsevier Inc.

  5. The robot hand illusion: inducing proprioceptive drift through visuo-motor congruency.

    Science.gov (United States)

    Romano, Daniele; Caffa, Elisa; Hernandez-Arieta, Alejandro; Brugger, Peter; Maravita, Angelo

    2015-04-01

    The representation of one's own body sets the border of the self, but also shapes the space where we interact with external objects. Under particular conditions, such as in the rubber hand illusion external objects can be incorporated in one's own body representation, following congruent visuo-tactile stroking of one's own and a fake hand. This procedure induces an illusory sense of ownership for the fake hand and a shift of proprioceptive localization of the own hand towards the fake hand. Here we investigated whether pure visuo-motor, instead of visuo-tactile, congruency between one's own hand and a detached myoelectric-controlled robotic hand can induce similar embodiment effects. We found a shift of proprioceptive hand localization toward the robot hand, only following synchronized real hand/robot hand movements. Notably, no modulation was found of the sense of ownership following either synchronous or asynchronous-movement training. Our findings suggest that visuo-motor synchrony can drive the localization of one's own body parts in space, even when somatosensory input is kept constant and the experience of body ownership is maintained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Development of remote handling system based on 3-D shape recognition technique

    International Nuclear Information System (INIS)

    Tomizuka, Chiaki; Takeuchi, Yutaka

    2006-01-01

    In a nuclear facility, the maintenance and repair activities must be done remotely in a radioactive environment. Fuji Electric Systems Co., Ltd. has developed a remote handling system based on 3-D recognition technique. The system recognizes the pose and position of the target to manipulate, and visualizes the scene with the target in 3-D, enabling an operator to handle it easily. This paper introduces the concept and the key features of this system. (author)

  7. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    Full Text Available We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large vocabulary (approximately 1000 words) speech recognition experiments. The experiments were performed using clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB) with additive white Gaussian noise, and by 19% relative to the audio-only speech recognition WER under clean audio conditions.
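
    A minimal sketch of the dimensionality-reduction step described above: PCA is fitted to the extracted FAP vectors and the projection weights serve as visual features for the recognizer. The number of FAPs per frame and the number of retained components are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder matrix of facial animation parameters: one row per video frame
fap_matrix = np.random.rand(5000, 68)

pca = PCA(n_components=10).fit(fap_matrix)
visual_features = pca.transform(fap_matrix)   # projection weights used as visual features
print(pca.explained_variance_ratio_.sum())    # variance retained by the 10 components
```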

  8. Study on road sign recognition in LabVIEW

    Science.gov (United States)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2016-02-01

    Road and traffic sign identification is a field of study that can be used to aid the development of in-car advisory systems. It uses computer vision and artificial intelligence to extract road signs from outdoor images acquired by a camera in uncontrolled lighting conditions, where they may be occluded by other objects or may suffer from problems such as color fading, disorientation, and variations in shape and size. An automatic means of identifying traffic signs under these conditions can make a significant contribution to the development of Intelligent Transport Systems (ITS) that continuously monitor the driver, the vehicle, and the road. Road and traffic signs are characterized by a number of features which make them recognizable from the environment: they are located in standard positions and have standard shapes, standard colors, and known pictograms. These characteristics make them suitable for image-based identification. Traffic sign identification covers two problems: traffic sign detection and traffic sign recognition. Traffic sign detection is meant for the accurate localization of traffic signs in the image space, while traffic sign recognition handles the labeling of such detections into specific traffic sign types or subcategories [1].

  9. From movement to mechanism : exploring expressive movement qualities in shape-change

    NARCIS (Netherlands)

    Kwak, M.; Frens, J.W.

    2015-01-01

    This one-day studio revolves around the exploration of expressive movement qualities in shape-change by means of physical sketching and prototyping. It is a hands-on studio where participants first explore expressive movement qualities and interaction scenarios with a generic shape-changing platform

  10. Oriented active shape models.

    Science.gov (United States)

    Liu, Jiamin; Udupa, Jayaram K

    2009-04-01

    Active shape models (ASM) are widely employed for recognizing anatomic structures and for delineating them in medical images. In this paper, a novel strategy called oriented active shape models (OASM) is presented in an attempt to overcome the following five limitations of ASM: 1) lower delineation accuracy, 2) the requirement of a large number of landmarks, 3) sensitivity to search range, 4) sensitivity to initialization, and 5) inability to fully exploit the specific information present in the given image to be segmented. OASM effectively combines the rich statistical shape information embodied in ASM with the boundary orientedness property and the globally optimal delineation capability of the live wire methodology of boundary segmentation. The latter characteristics allow live wire to effectively separate an object boundary from other nonobject boundaries with similar properties especially when they come very close in the image domain. The approach leads to a two-level dynamic programming method, wherein the first level corresponds to boundary recognition and the second level corresponds to boundary delineation, and to an effective automatic initialization method. The method outputs a globally optimal boundary that agrees with the shape model if the recognition step is successful in bringing the model close to the boundary in the image. Extensive evaluation experiments have been conducted by utilizing 40 image (magnetic resonance and computed tomography) data sets in each of five different application areas for segmenting breast, liver, bones of the foot, and cervical vertebrae of the spine. Comparisons are made between OASM and ASM based on precision, accuracy, and efficiency of segmentation. Accuracy is assessed using both region-based false positive and false negative measures and boundary-based distance measures. The results indicate the following: 1) The accuracy of segmentation via OASM is considerably better than that of ASM; 2) The number of landmarks

  11. [Analysis and research of brain-computer interface experiments for imaging left-right hands movement].

    Science.gov (United States)

    Wu, Yazhou; He, Qinghua; Huang, Hua; Zhang, Ling; Zhuo, Yu; Xie, Qi; Wu, Baoming

    2008-10-01

    This research explores a pragmatic approach to BCI based on imagined movement: extracting EEG features that reflect different mental tasks by searching for suitable signal extraction and recognition algorithms, boosting the communication recognition rate of the BCI system, and ultimately establishing theoretical and experimental support for BCI applications. In this paper, mental tasks of imagining left- and right-hand movement from 6 subjects were studied in three different time sections (hint keying at 2 s, 1 s and 0 s after the appearance of the arrow). Wavelet analysis and a Feed-forward Back-propagation Neural Network (BP-NN) were then used to process and analyze the experimental data off-line. Delay times Δt2, Δt1 and Δt0 for all subjects in the three time sections were analyzed; there was a significant difference between Δt0 and Δt2 or Δt1 (P < 0.05). The average recognition rates were 65%, 86.67% and 72%, respectively. Clearly different features for imagined left- and right-hand movement appeared about 0.5-1 s before actual movement, and these features displayed significant differences. A higher communication recognition rate was obtained with hint keying at about 1 s after the appearance of the arrow. These results show the feasibility of using the extracted feature signals as external control signals for a BCI system, and demonstrate that the project provides new ideas and methods for feature extraction and classification of mental tasks for BCI.
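
    A rough sketch of the kind of pipeline the abstract describes: wavelet sub-band features from each EEG trial fed to a feed-forward back-propagation network. PyWavelets and scikit-learn are assumed stand-ins, and the feature set and network size are illustrative, not the authors' settings.

    ```python
    # Wavelet sub-band energies per channel, classified by an MLP trained with
    # back-propagation. Wavelet family, decomposition level and hidden-layer
    # size are illustrative assumptions.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(trial, wavelet="db4", level=4):
        """trial: (n_channels, n_samples) array for one imagined-movement trial.
        Returns the energy of each wavelet sub-band of each channel."""
        features = []
        for channel in trial:
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            features.extend(np.sum(c ** 2) for c in coeffs)   # sub-band energies
        return np.asarray(features)

    def train_left_right_classifier(trials, labels):
        """trials: list of (n_channels, n_samples) arrays; labels: 0 = left,
        1 = right imagined hand movement."""
        X = np.vstack([wavelet_features(t) for t in trials])
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=0)
        clf.fit(X, labels)          # back-propagation training
        return clf
    ```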

  12. Simple Ontology of Manipulation Actions based on Hand-Object Relations

    DEFF Research Database (Denmark)

    Wörgötter, Florentin; Aksoy, E. E.; Krüger, Norbert

    2013-01-01

    and time. For this we use as temporal anchor points those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that by this one can define a relatively small tree-like manipulation ontology. We find fewer than 30 fundamental manipulations. The temporal...... and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study....

  13. Neural network recognition of mammographic lesions

    International Nuclear Information System (INIS)

    Oldham, W.J.B.; Downes, P.T.; Hunter, V.

    1987-01-01

    A method for recognition of mammographic lesions through the use of neural networks is presented. Neural networks have exhibited the ability to learn the shape and internal structure of patterns. Digitized mammograms containing circumscribed and stellate lesions were used to train a feedforward synchronous neural network that self-organizes to stable attractor states. Encoding of data for submission to the network was accomplished by performing a fractal analysis of the digitized image. This results in a scale-invariant representation of the lesions. Results are discussed.

  14. Applications of shape analysis to domestic and international security

    International Nuclear Information System (INIS)

    Prasad, Lakshman; Skourikhine, Alexei N.; Doak, Justin E.

    2002-01-01

    The rapidly growing area of cooperative international security calls for pervasive deployment of smart sensors that render valuable information and reduce operational costs and errors. Among the sensors used, vision sensors are by far the most versatile, tangible, and rich in the information they provide about their environment. On the flip side, they are also the most complex to analyze automatically for the extraction of high-level information. The ability to process imagery in a useful manner requires at least partial functional emulation of human capabilities of visual understanding. Of all visual cues available in image data, shape is perhaps the most important for understanding the content of an image. In this paper we present an overview of ongoing research at LANL on geometric shape analysis. The objective of our research is to develop a computational framework for multiscale characterization, analysis, and recognition of shapes. This framework will enable the development of a comprehensive and connected body of mathematical methods and algorithms, based on the topological, metrical, and morphological properties of shapes. We discuss its potential applications to automated surveillance, monitoring, container tracking and inspection, weapons dismantlement, and treaty verification. The framework will develop a geometric filtering scheme for extracting semantically salient shape features. This effort creates a paradigm for solving shape-related problems in Pattern Recognition, Computer Vision, and Image Understanding in a conceptually cohesive and algorithmically amenable manner. The research aims to develop an advanced image analysis capability at LANL for solving a wide range of problems in automated facility surveillance, nuclear materials monitoring, treaty verification, and container inspection and tracking. The research provides the scientific underpinnings that will enable us to build smart surveillance cameras, with a direct practical impact on LANL

  15. The Ninapro database: A resource for sEMG naturally controlled robotic hand prosthetics.

    Science.gov (United States)

    Atzori, Manfredo; Muller, Henning

    2015-01-01

    The dexterous natural control of robotic prosthetic hands with non-invasive techniques is still a challenge: surface electromyography gives some control capabilities, but these are limited, often not natural, and require long training times; pattern recognition techniques have only recently started to be applied in practice. While results in the scientific literature are promising, they have to be improved to meet real needs. The Ninapro database aims to improve the field of naturally controlled robotic hand prosthetics by allowing research groups worldwide to develop and test movement recognition and force control algorithms on a benchmark database. Currently, the Ninapro database includes data from 67 intact subjects and 11 amputee subjects performing approximately 50 different movements. The data are aimed at permitting the study of the relationships between surface electromyography, kinematics and dynamics. The Ninapro acquisition protocol was created to be easy to reproduce. Currently, the number of datasets included in the database is increasing thanks to the collaboration of several research groups.

  16. The impact of shoulder abduction loading on EMG-based intention detection of hand opening and closing after stroke.

    Science.gov (United States)

    Lan, Yiyun; Yao, Jun; Dewald, Julius P A

    2011-01-01

    Many stroke patients are subject to limited hand function in the paretic arm due to a significant loss of Corticospinal Tract (CST) fibers. A possible solution to this problem is to classify surface Electromyography (EMG) signals generated by hand movements and use them to implement Functional Electrical Stimulation (FES). However, EMG usually presents an abnormal muscle coactivation pattern after stroke, shown as increased coupling between muscles within and/or across joints. The resulting Abnormal Muscle Synergies (AMS) could make the classification more difficult in individuals with stroke, especially when attempting to use the hand together with other joints in the paretic arm. Therefore, this study aimed to identify the impact of AMS following stroke on EMG pattern recognition between two hand movements. To achieve this goal, 7 chronic hemiparetic stroke subjects were recruited and asked to perform hand opening and closing movements with their paretic arm while being either fully supported by a virtual table or loaded with 25% of the subject's maximum shoulder abduction force. During the execution of the motor tasks, EMG signals from the wrist flexors and extensors were simultaneously acquired. Our results showed that increased synergy-induced activity at the elbow flexors, induced by increasing shoulder abduction loading, deteriorated the performance of EMG pattern recognition for hand opening in those with weak grasp strength and EMG activity. However, no such impact on hand closing has been observed, possibly because finger/wrist flexion is facilitated by the shoulder abduction-induced flexion synergy.

  17. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and reach the required recognition accuracy, even for the Traditional Convolutional Neural Network (TCNN). To deal with the above problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image were cut from the image using a visual saliency model. Then, these salient regions of the image were used to train a sparse autoencoder and obtain a collection of convolutional filters which accord with the statistical characteristics of the dataset. At last, a DCNN with an Inception module and its variants was constructed. To improve the recognition accuracy, the depth of the network is deepened. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to TCNN when recognizing the down in the images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition
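
    A minimal sketch of the weight-initialization idea: learn filters with a sparse autoencoder on patches from the salient regions, then copy them into the first convolutional layer. PyTorch is an assumed stand-in here; patch size, number of filters and the sparsity penalty are placeholders, not the values used in the paper.

    ```python
    # Sparse-autoencoder filters used to initialize the first conv layer.
    # Sizes and hyper-parameters below are illustrative assumptions.
    import torch
    import torch.nn as nn

    PATCH, N_FILTERS = 7, 32        # 7x7 grayscale patches, 32 learned filters

    class SparseAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Linear(PATCH * PATCH, N_FILTERS)
            self.decoder = nn.Linear(N_FILTERS, PATCH * PATCH)

        def forward(self, x):
            code = torch.sigmoid(self.encoder(x))
            return self.decoder(code), code

    def learn_filters(patches, epochs=50, sparsity_weight=1e-3):
        """patches: (n_patches, PATCH*PATCH) tensor of salient-region patches."""
        model = SparseAutoencoder()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            recon, code = model(patches)
            loss = nn.functional.mse_loss(recon, patches) \
                 + sparsity_weight * code.abs().mean()     # L1 sparsity penalty
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Reshape each encoder row into a (1, PATCH, PATCH) convolution kernel.
        return model.encoder.weight.detach().view(N_FILTERS, 1, PATCH, PATCH)

    def init_first_conv_layer(conv: nn.Conv2d, patches):
        """conv is assumed to be nn.Conv2d(1, N_FILTERS, PATCH)."""
        with torch.no_grad():
            conv.weight.copy_(learn_filters(patches))   # data-driven initialization
    ```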

  18. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition has been taken into defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate localization on specific databases; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on the concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  19. Micro flexible robot hand using electro-conjugate fluid

    Science.gov (United States)

    Ueno, S.; Takemura, K.; Yokota, S.; Edamura, K.

    2013-12-01

    An electro-conjugate fluid (ECF) is a kind of functional fluid which produces a flow (ECF flow) when subjected to a high DC voltage. Since it only requires a tiny, micrometer-scale electrode pair to generate the ECF flow, the ECF is a promising micro fluid pressure source. This study proposes a novel micro robot hand using the ECF. The robot hand is mainly composed of five flexible fingers and an ECF flow generator. Each flexible finger is made of silicone rubber and has several chambers in series along its axis. When the chambers are depressurized, they deflate, making the actuator bend. The ECF flow generator has a needle-ring electrode pair inside. When putting the ECF flow generator into the ECF and applying a voltage of 6.0 kV to the electrode pair, we obtain a pressure of 33.1 kPa. Using the components mentioned above, we developed the ECF robot hand. The height, width and mass of the robot hand are 45 mm, 40 mm and 5.2 g, respectively. Since the actuators are flexible, the robot hand can grasp objects of various shapes without a complex controller.

  20. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

    Full Text Available This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller provides a 'connect-and-play' service that lets users control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robots. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into the corresponding control commands according to their characteristic curves. A vibration motor provides vibrotactile feedback to the user when an improper operation is performed. The performance of the proposed hand motion-based interface and of a traditional keyboard-and-mouse interface was compared in robot navigation experiments. The experimental results of home robot navigation show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller, its precision is 15.4% higher, and its execution time is 24.7% shorter. This means that the proposed hand motion-based interface is more efficient and flexible.
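
    A small sketch of matching a recorded three-axis acceleration trace against stored characteristic curves, one template per gesture. The fixed-length resampling and the normalized-correlation score are illustrative choices, not the paper's exact algorithm.

    ```python
    # Template matching of accelerometer traces against per-gesture
    # "characteristic curves". Resampling length and scoring are assumptions.
    import numpy as np

    def resample(trace, length=64):
        """trace: (n_samples, 3) acceleration; resample each axis to a fixed length."""
        t_old = np.linspace(0.0, 1.0, len(trace))
        t_new = np.linspace(0.0, 1.0, length)
        return np.column_stack([np.interp(t_new, t_old, trace[:, k]) for k in range(3)])

    def classify_gesture(trace, templates):
        """templates: dict gesture_name -> (n, 3) characteristic curve."""
        x = resample(trace).ravel()
        x = (x - x.mean()) / (x.std() + 1e-9)
        scores = {}
        for name, template in templates.items():
            t = resample(template).ravel()
            t = (t - t.mean()) / (t.std() + 1e-9)
            scores[name] = float(np.dot(x, t) / len(x))   # normalized correlation
        return max(scores, key=scores.get)                # best-matching gesture
    ```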

  1. A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition.

    Science.gov (United States)

    Ciotti, Simone; Battaglia, Edoardo; Carbonaro, Nicola; Bicchi, Antonio; Tognetti, Alessandro; Bianchi, Matteo

    2016-06-02

    Achieving accurate and reliable kinematic hand pose reconstructions represents a challenging task. The main reason for this is the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution to tackle this issue, since it enables a more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., the inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabrics (KPF) technology, to develop, for the first time, an optimally-designed under-sensed glove for measuring hand kinematics. We used only five sensors optimally placed on the hand and completed hand pose reconstruction (described according to a kinematic model with 19 degrees of freedom) leveraging upon synergistic information. The reconstructions we obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness.
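
    A rough numpy sketch of the under-sensed reconstruction idea: estimate synergy (principal-component) coefficients from the few instrumented joints, then reconstruct all 19 joint angles. The number of synergies and the sensor placement are assumptions for the example, not the optimal design derived in the paper.

    ```python
    # Synergy-based hand pose reconstruction from a reduced set of sensors.
    import numpy as np

    def build_synergies(calibration_poses, n_synergies=5):
        """calibration_poses: (n_poses, 19) joint angles from full-DOF recordings."""
        mean_pose = calibration_poses.mean(axis=0)
        _, _, vt = np.linalg.svd(calibration_poses - mean_pose, full_matrices=False)
        return mean_pose, vt[:n_synergies].T          # (19,), (19, n_synergies)

    def reconstruct_pose(measured, sensor_joints, mean_pose, synergies):
        """measured: angles read by the glove's sensors, one per entry of
        sensor_joints (indices of the instrumented joints)."""
        s_meas = synergies[sensor_joints, :]           # rows seen by the sensors
        coeffs, *_ = np.linalg.lstsq(s_meas, measured - mean_pose[sensor_joints],
                                     rcond=None)
        return mean_pose + synergies @ coeffs          # full 19-DOF estimate
    ```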

  2. A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition

    Directory of Open Access Journals (Sweden)

    Simone Ciotti

    2016-06-01

    Full Text Available Achieving accurate and reliable kinematic hand pose reconstructions represents a challenging task. The main reason for this is the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution to tackle this issue, since it enables a more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., the inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabrics (KPF) technology, to develop, for the first time, an optimally-designed under-sensed glove for measuring hand kinematics. We used only five sensors optimally placed on the hand and completed hand pose reconstruction (described according to a kinematic model with 19 degrees of freedom) leveraging upon synergistic information. The reconstructions we obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness.

  3. Clustering of Farsi sub-word images for whole-book recognition

    Science.gov (United States)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, is used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of newly created clusters in a page can be used as a criterion for assessing the quality of print and evaluating preprocessing phases.

  4. The what, when, where, and how of visual word recognition.

    Science.gov (United States)

    Carreiras, Manuel; Armstrong, Blair C; Perea, Manuel; Frost, Ram

    2014-02-01

    A long-standing debate in reading research is whether printed words are perceived in a feedforward manner on the basis of orthographic information, with other representations such as semantics and phonology activated subsequently, or whether the system is fully interactive and feedback from these representations shapes early visual word recognition. We review recent evidence from behavioral, functional magnetic resonance imaging, electroencephalography, magnetoencephalography, and biologically plausible connectionist modeling approaches, focusing on how each approach provides insight into the temporal flow of information in the lexical system. We conclude that, consistent with interactive accounts, higher-order linguistic representations modulate early orthographic processing. We also discuss how biologically plausible interactive frameworks and coordinated empirical and computational work can advance theories of visual word recognition and other domains (e.g., object recognition). Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Application of 3D Zernike descriptors to shape-based ligand similarity searching.

    Science.gov (United States)

    Venkatraman, Vishwesh; Chakravarthy, Padmasini Ramji; Kihara, Daisuke

    2009-12-17

    The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. The performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under conditions that simulate actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than, or comparably to, the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. The examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD.

  6. Three RNA recognition motifs participate in RNA recognition and structural organization by the pro-apoptotic factor TIA-1

    Science.gov (United States)

    Bauer, William J.; Heath, Jason; Jenkins, Jermaine L.; Kielkopf, Clara L.

    2012-01-01

    T-cell intracellular antigen-1 (TIA-1) regulates developmental and stress-responsive pathways through distinct activities at the levels of alternative pre-mRNA splicing and mRNA translation. The TIA-1 polypeptide contains three RNA recognition motifs (RRMs). The central RRM2 and C-terminal RRM3 associate with cellular mRNAs. The N-terminal RRM1 enhances interactions of a C-terminal Q-rich domain of TIA-1 with the U1-C splicing factor, despite linear separation of the domains in the TIA-1 sequence. Given the expanded functional repertoire of the RRM family, it was unknown whether TIA-1 RRM1 contributes to RNA binding as well as documented protein interactions. To address this question, we used isothermal titration calorimetry and small-angle X-ray scattering (SAXS) to dissect the roles of the TIA-1 RRMs in RNA recognition. Notably, the fas RNA exhibited two binding sites with indistinguishable affinities for TIA-1. Analyses of TIA-1 variants established that RRM1 was dispensable for binding AU-rich fas sites, yet all three RRMs were required to bind a polyU RNA with high affinity. SAXS analyses demonstrated a `V' shape for a TIA-1 construct comprising the three RRMs, and revealed that its dimensions became more compact in the RNA-bound state. The sequence-selective involvement of TIA-1 RRM1 in RNA recognition suggests a possible role for RNA sequences in regulating the distinct functions of TIA-1. Further implications for U1-C recruitment by the adjacent TIA-1 binding sites of the fas pre-mRNA and the bent TIA-1 shape, which organizes the N- and C-termini on the same side of the protein, are discussed. PMID:22154808

  7. Recognition of human gait in oblique and frontal views using Kinect ...

    African Journals Online (AJOL)

    This study describes the recognition of human gait in the oblique and frontal views using novel gait features derived from the skeleton joints provided by Kinect. In D-joint, the skeleton joints were extracted directly from the Kinect, which generates the gait feature. On the other hand, H-joint distance is a feature of distance ...

  8. 8 CFR 1292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    1292.2 Organizations qualified for recognition; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization...

  9. Structural determinants for selective recognition of peptide ligands for endothelin receptor subtypes ETA and ETB.

    Science.gov (United States)

    Lättig, Jens; Oksche, Alexander; Beyermann, Michael; Rosenthal, Walter; Krause, Gerd

    2009-07-01

    The molecular basis for recognition of peptide ligands endothelin-1, -2 and -3 in endothelin receptors is poorly understood. Especially the origin of ligand selectivity for ET(A) or ET(B) is not clearly resolved. We derived sequence-structure-function relationships of peptides and receptors from mutational data and homology modeling. Our major findings are the dissection of peptide ligands into four epitopes and the delineation of four complementary structural portions on receptor side explaining ligand recognition in both endothelin receptor subtypes. In addition, structural determinants for ligand selectivity could be described. As a result, we could improve the selectivity of BQ3020 about 10-fold by a single amino acid substitution, validating our hypothesis for ligand selectivity caused by different entrances to the receptors' transmembrane binding sites. A narrow tunnel shape in ET(A) is restrictive for a selected group of peptide ligands' N-termini, whereas a broad funnel-shaped entrance in ET(B) accepts a variety of different shapes and properties of ligands.

  10. Repeatability of grasp recognition for robotic hand prosthesis control based on sEMG data.

    Science.gov (United States)

    Palermo, Francesca; Cognolato, Matteo; Gijsberts, Arjan; Muller, Henning; Caputo, Barbara; Atzori, Manfredo

    2017-07-01

    Control methods based on sEMG have obtained promising results for hand prosthetics. However, control system robustness is still often inadequate and does not allow amputees to perform the large number of movements useful for everyday life. Only a few studies have analyzed the repeatability of sEMG classification of hand grasps. The main goals of this paper are to explore repeatability in sEMG data and to release a repeatability database with the recorded experiments. The data are recorded from 10 intact subjects repeating 7 grasps 12 times, twice a day for 5 days. The data are publicly available on the Ninapro web page. The repeatability analysis is based on the comparison of movement classification accuracy across several data acquisitions and different subjects. The analysis is performed using mean absolute value and waveform length features and a Random Forest classifier. The accuracy obtained by training and testing on acquisitions at different times is on average 27.03% lower than training and testing on the same acquisition. The results obtained by training and testing on different acquisitions suggest that previous acquisitions can be used to train the classification algorithms. The inter-subject variability is remarkable, suggesting that specific characteristics of the subjects can affect repeatability and sEMG classification accuracy. In conclusion, the results of this paper can contribute to the development of more robust control systems for hand prostheses, while the presented data allow researchers to test repeatability in further analyses.
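
    A sketch of the named feature/classifier combination (mean absolute value and waveform length per channel, Random Forest), using scikit-learn as an assumed stand-in; window segmentation and forest size are placeholder choices.

    ```python
    # MAV + waveform-length features per sEMG channel, Random Forest classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def emg_features(window):
        """window: (n_samples, n_channels) sEMG segment -> 2 features per channel."""
        mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
        return np.concatenate([mav, wl])

    def train_grasp_classifier(windows, labels, n_trees=100):
        """windows: list of sEMG segments, labels: grasp id for each segment."""
        X = np.vstack([emg_features(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        clf.fit(X, labels)
        return clf

    # Training on one acquisition and testing on another gives an estimate of
    # the between-session repeatability discussed above.
    ```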

  11. Threshold models of recognition and the recognition heuristic

    Directory of Open Access Journals (Sweden)

    Edgar Erdfelder

    2011-02-01

    Full Text Available According to the recognition heuristic (RH) theory, decisions follow the recognition principle: Given a high validity of the recognition cue, people should prefer recognized choice options compared to unrecognized ones. Assuming that the memory strength of choice options is strongly correlated with both the choice criterion and recognition judgments, the RH is a reasonable strategy that approximates optimal decisions with a minimum of cognitive effort (Davis-Stober, Dana, and Budescu, 2010). However, theories of recognition memory are not generally compatible with this assumption. For example, some threshold models of recognition presume that recognition judgments can arise from two types of cognitive states: (1) certainty states in which judgments are almost perfectly correlated with memory strength and (2) uncertainty states in which recognition judgments reflect guessing rather than differences in memory strength. We report an experiment designed to test the prediction that the RH applies to certainty states only. Our results show that memory states rather than recognition judgments affect use of recognition information in binary decisions.

  12. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for the remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of the PDOE image. To recognize the gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition approach, based on glove sensors, uses a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognizing the 3D loci from the Polhemus sensor, a discrete HMM is also adopted. An alternative approach combines the two foregoing recognition systems and uses the vision and glove sensors together. The extracted mesh feature and the 8-direction code from the locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is introduced here and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
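
    A sketch of how a tracked 2D hand trajectory can be quantized into the 8-direction code used as input symbols for a discrete HMM. The eight-sector quantization is standard; the trajectory sampling is assumed given, and the example is not the report's implementation.

    ```python
    # Quantize a tracked trajectory into 8-direction symbols for a discrete HMM.
    import numpy as np

    def eight_direction_code(trajectory):
        """trajectory: (n_points, 2) array of tracked hand positions.
        Returns one symbol in 0..7 per segment (0 = east, counter-clockwise)."""
        deltas = np.diff(trajectory, axis=0)
        angles = np.arctan2(deltas[:, 1], deltas[:, 0])        # -pi..pi
        symbols = np.round(angles / (np.pi / 4)).astype(int) % 8
        return symbols.tolist()     # e.g. feed this sequence to a discrete HMM

    # Example: two moves to the right followed by one move up -> [0, 0, 2]
    print(eight_direction_code(np.array([[0, 0], [1, 0], [2, 0], [2, 1]])))
    ```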

  13. A Demonstration of Improved Precision of Word Recognition Scores

    Science.gov (United States)

    Schlauch, Robert S.; Anderson, Elizabeth S.; Micheyl, Christophe

    2014-01-01

    Purpose: The purpose of this study was to demonstrate improved precision of word recognition scores (WRSs) by increasing list length and analyzing phonemic errors. Method: Pure-tone thresholds (frequencies between 0.25 and 8.0 kHz) and WRSs were measured in 3 levels of speech-shaped noise (50, 52, and 54 dB HL) for 24 listeners with normal…

  14. Automatic shape recognition of a fast transient signal

    International Nuclear Information System (INIS)

    Charles, Gilbert.

    1976-01-01

    A system was developed to recognize whether the shape of a signal x(t) is similar (or identical) to that of an element yi(t) of an ensemble S composed of N known, memorised signals. x(t) is a time-limited signal; correlation coefficients (ρ²) give the similarity measure of two signals. To solve the problem of the digital recording of the signals x(t), two devices were realized: an analog-to-digital converter which permits the recording of fast transient signals (bandwidth > 1 GHz, sampling frequency approximately 100 GHz, resolution: 9 bits, 576 samples); and an automatic attenuator which scales the signal x(t) before digitization (its bandwidth is 70 MHz at -1 dB). A theoretical analysis determines the required resolution of the converter as a function of the signal characteristics and of the desired precision for the calculation of ρ². [fr]

  15. Speech recognition in individuals with sensorineural hearing loss.

    Science.gov (United States)

    de Andrade, Adriana Neves; Iorio, Maria Cecilia Martinelli; Gil, Daniela

    2016-01-01

    Hearing loss can negatively influence the communication performance of individuals, who should be evaluated with suitable material and in situations of listening close to those found in everyday life. To analyze and compare the performance of patients with mild-to-moderate sensorineural hearing loss in speech recognition tests carried out in silence and with noise, according to the variables ear (right and left) and type of stimulus presentation. The study included 19 right-handed individuals with mild-to-moderate symmetrical bilateral sensorineural hearing loss, submitted to the speech recognition test with words in different modalities and speech test with white noise and pictures. There was no significant difference between right and left ears in any of the tests. The mean number of correct responses in the speech recognition test with pictures, live voice, and recorded monosyllables was 97.1%, 85.9%, and 76.1%, respectively, whereas after the introduction of noise, the performance decreased to 72.6% accuracy. The best performances in the Speech Recognition Percentage Index were obtained using monosyllabic stimuli, represented by pictures presented in silence, with no significant differences between the right and left ears. After the introduction of competitive noise, there was a decrease in individuals' performance. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  16. Dysfunctional role of parietal lobe during self-face recognition in schizophrenia.

    Science.gov (United States)

    Yun, Je-Yeon; Hur, Ji-Won; Jung, Wi Hoon; Jang, Joon Hwan; Youn, Tak; Kang, Do-Hyung; Park, Sohee; Kwon, Jun Soo

    2014-01-01

    Anomalous sense of self is central to schizophrenia yet difficult to demonstrate empirically. The present study examined the effective neural network connectivity underlying self-face recognition in patients with schizophrenia (SZ) using [15O]H2O Positron Emission Tomography (PET) and Structural Equation Modeling. Eight SZ and eight age-matched healthy controls (CO) underwent six consecutive [15O]H2O PET scans during self-face (SF) and famous face (FF) recognition blocks, each of which was repeated three times. There were no behavioral performance differences between the SF and FF blocks in SZ. Moreover, voxel-based analyses of data from SZ revealed no significant differences in the regional cerebral blood flow (rCBF) levels between the SF and FF recognition conditions. Further effective connectivity analyses for SZ also showed a similar pattern of effective connectivity network across the SF and FF recognition. On the other hand, comparison of SF recognition effective connectivity network between SZ and CO demonstrated significantly attenuated effective connectivity strength not only between the right supramarginal gyrus and left inferior temporal gyrus, but also between the cuneus and right medial prefrontal cortex in SZ. These findings support a conceptual model that posits a causal relationship between disrupted self-other discrimination and attenuated effective connectivity among the right supramarginal gyrus, cuneus, and prefronto-temporal brain areas involved in the SF recognition network of SZ. © 2013.

  17. Automated recognition system for ELM classification in JET

    International Nuclear Information System (INIS)

    Duro, N.; Dormido, R.; Vega, J.; Dormido-Canto, S.; Farias, G.; Sanchez, J.; Vargas, H.; Murari, A.

    2009-01-01

    Edge localized modes (ELMs) are instabilities occurring at the edge of H-mode plasmas. Considerable efforts are being devoted to understanding the physics behind this non-linear phenomenon. A first characterization of ELMs is usually their identification as type I or type III. An automated pattern recognition system has been developed in JET for off-line ELM recognition and classification. The empirical method presented in this paper analyzes each individual ELM instead of starting from a temporal segment containing many ELM bursts. ELM recognition and isolation are carried out using three signals: Dα, line-integrated electron density and stored diamagnetic energy. A reduced set of characteristics (such as the diamagnetic energy drop, ELM period or Dα shape) has been extracted to build supervised and unsupervised learning systems for classification purposes. The former are based on support vector machines (SVM). The latter have been developed with hierarchical and K-means clustering methods. The success rate of the classification systems is about 98% for a database of almost 300 ELMs.
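
    A minimal sketch of the supervised branch: an SVM labelling each isolated ELM as type I or type III from a few scalar characteristics. Feature scaling and the RBF kernel are common defaults assumed for the example, not details taken from the JET system.

    ```python
    # SVM classification of isolated ELMs from a few scalar characteristics.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_elm_classifier(energy_drop, elm_period, dalpha_shape, labels):
        """Each argument is a 1-D array with one value per isolated ELM;
        labels holds 'type I' / 'type III' tags from manual classification."""
        X = np.column_stack([energy_drop, elm_period, dalpha_shape])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, labels)
        return clf   # clf.predict(new_features) labels new ELMs off-line
    ```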

  18. Effects of anticaricaturing vs. caricaturing and their neural correlates elucidate a role of shape for face learning.

    Science.gov (United States)

    Schulz, Claudia; Kaufmann, Jürgen M; Walther, Lydia; Schweinberger, Stefan R

    2012-08-01

    To assess the role of shape information for unfamiliar face learning, we investigated effects of photorealistic spatial anticaricaturing and caricaturing on later face recognition. We assessed behavioural performance and event-related brain potential (ERP) correlates of recognition, using different images of anticaricatures, veridical faces, or caricatures at learning and test. Relative to veridical faces, recognition performance improved for caricatures, with performance decrements for anticaricatures in response times. During learning, an amplitude pattern with caricatures>veridicals=anticaricatures was seen for N170, left-hemispheric ERP negativity during the P200 and N250 time segments (200-380 ms), and for a late positive component (LPC, 430-830 ms), whereas P200 and N250 responses exhibited an additional difference between veridicals and anticaricatures over the right hemisphere. During recognition, larger amplitudes for caricatures again started in the N170, whereas the P200 and the right-hemispheric N250 exhibited a more graded pattern of amplitude effects (caricatures>veridicals>anticaricatures), a result which was specific to learned but not novel faces in the N250. Together, the results (i) emphasise the role of facial shape for visual encoding in the learning of previously unfamiliar faces and (ii) provide important information about the neuronal timing of the encoding advantage enjoyed by faces with distinctive shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. IMPORTANCE OF HAND HYGIENE AS HOSPITAL INFECTION PROPHYLAXIS BY HEALTH PROFESSIONALS

    Directory of Open Access Journals (Sweden)

    Elaine C. de Souza

    2013-12-01

    Full Text Available Hospital infections are currently a matter of international concern, as they involve the performance of health professionals, the quality of physical facilities, and the materials in daily use. This study aimed to verify whether health professionals recognize the importance of hand hygiene in preventing nosocomial infection. This cross-sectional study applied a semi-structured form to a sample of 60 professionals, including nurses, nursing technicians and/or licensed practical nurses, and doctors working in a public hospital located in Bahia, in October 2011, in compliance with the ethical requirements for studies in humans. The results showed that 98.3% of respondents recognize the importance of hand hygiene in preventing hospital infection and 83.3% said they master the technique; however, only 53.4% described it correctly. We conclude that, despite professionals' awareness of the importance of hand hygiene and the availability of products for it, it is necessary to implement educational activities that motivate and intensify professionals' adherence.

  20. CHIRAL RECOGNITION OF CYSTEINE MOLECULES BY CHIRAL CdSe AND CdS QUANTUM DOTS

    Directory of Open Access Journals (Sweden)

    M. V. Mukhina

    2015-11-01

    Full Text Available Here, we report an investigation of the mechanism of chiral molecular recognition of cysteine biomolecules by chiral CdSe and CdS semiconductor nanocrystals. To observe the chiral recognition process, we prepared enantioenriched ensembles of the nanocrystals capped with an achiral ligand. The enantioenriched samples of intrinsically chiral CdSe quantum dots were prepared by separation of the initial racemic mixture of nanocrystals using chiral phase transfer from chloroform to water driven by L- and D-cysteine. Chiral molecules of cysteine and penicillamine were substituted for achiral molecules of dodecanethiol on the surfaces of the CdSe and CdS samples, respectively, via reverse phase transfer from water to chloroform. We estimated the efficiency of the formation of hetero- (d-L or l-D) and homocomplexes (l-L) by comparing the extents of the corresponding complexing reactions. Using circular dichroism spectroscopy data, we show the ability of nanocrystal enantiomers to discriminate between left-handed and right-handed enantiomers of biomolecules via preferential formation of heterocomplexes. The development of approaches for obtaining chiral nanocrystals via chiral phase transfer offers opportunities for the investigation of molecular recognition at nano/bio interfaces.

  1. Superlattices assembled through shape-induced directional binding

    Science.gov (United States)

    Lu, Fang; Yager, Kevin G.; Zhang, Yugang; Xin, Huolin; Gang, Oleg

    2015-04-01

    Organization of spherical particles into lattices is typically driven by packing considerations. Although the addition of directional binding can significantly broaden structural diversity, nanoscale implementation remains challenging. Here we investigate the assembly of clusters and lattices in which anisotropic polyhedral blocks coordinate isotropic spherical nanoparticles via shape-induced directional interactions facilitated by DNA recognition. We show that these polyhedral blocks (cubes and octahedra), when mixed with spheres, promote the assembly of clusters with architecture determined by polyhedron symmetry. Moreover, three-dimensional binary superlattices are formed when DNA shells accommodate the shape disparity between nanoparticle interfaces. The crystallographic symmetry of the assembled lattices is determined by the spatial symmetry of the block's facets, while structural order depends on DNA-tuned interactions and the particle size ratio. The presented lattice assembly strategy, exploiting shape to define the global structure and DNA mediation locally, opens novel possibilities for by-design fabrication of binary lattices.

  2. Advances in biometrics for secure human authentication and recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Sing, Jamuna Kanta

    2013-01-01

    GENERAL BIOMETRICS: Security and Reliability Assessment for Biometric Systems (Gayatri Mirajkar); Review of Human Recognition Based on Retinal Images (Amin Dehghani). ADVANCED TOPICS IN BIOMETRICS: Visual Speech as Behavioral Biometric (Preety Singh, Vijay Laxmi, and Manoj Singh Gaur); Human Gait Signature for Biometric Authentication (Vijay John); Hand-Based Biometric for Personal Identification Using Correlation Filter Classifier (Mohammed Saigaa, Abdallah Meraoumia, Salim Chitroub, and Ahmed Bouridane); On Deciding the Dynamic Periocular Boundary for Human Recognition (Sambit Bakshi, Pankaj Kumar Sa, and Banshidhar Majhi); Retention of Electrocardiogram Features Insignificantly Devalorized as an Effect of Watermarking for a Multimodal Biometric Authentication System (Nilanjan Dey, Bijurika Nandi, Poulami Das, Achintya Das, and Sheli Sinha Chaudhuri); Facial Feature Point Extraction for Object Identification Using Discrete Contourlet Transform and Principal Component Analysis (N. G. Chitaliya and A. I. Trivedi). CASE STUDIES AND LA...

  3. [Evaluation of preparation of curved root canals using hand-used ProTaper].

    Science.gov (United States)

    Nie, Min; Zhao, Xin-Chen; Peng, Bin; Fan, Ming-Wen; Bian, Zhuan

    2009-05-01

    To evaluate the shaping ability of hand-used ProTaper on curved canals using the Endodontic Cube. Fifty-four curved root canals in vitro were selected, divided into three groups (A, B and C) according to the degree of curvature (α), and prepared with hand-used ProTaper. Before and after shaping, photographs of all the sections were taken under a stereomicroscope, and statistical analyses were performed. The dentin cutting quantity of the whole canal prepared with ProTaper in groups B and C was larger than that of group A. The deviation distance of the whole canal prepared with ProTaper in group C was significantly larger than that in group A, and the deviation distance in the middle portion was larger than that in group B. The ability of ProTaper to maintain the original canal shape in the middle portion was worse in group C than in groups A and B. The curvature of the root canal may increase the quantity of dentin cut and reduce the ability of ProTaper to maintain the original canal shape.

  4. Shape Distributions of Nonlinear Dynamical Systems for Video-Based Inference.

    Science.gov (United States)

    Venkataraman, Vinay; Turaga, Pavan

    2016-12-01

    This paper presents a shape-theoretic framework for the dynamical analysis of nonlinear dynamical systems, which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. The novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths, where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representations (shape distributions) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
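
    A sketch of the two steps implied above: delay-embed a scalar time series to reconstruct the phase space, then summarize the attractor's local shape with a distribution of pairwise point distances. Delay, embedding dimension and histogram size are illustrative parameters, not the paper's settings.

    ```python
    # Delay embedding plus a pairwise-distance "shape distribution" feature.
    import numpy as np

    def delay_embed(series, dim=3, delay=5):
        """Return the (n, dim) delay-embedded trajectory of a 1-D series."""
        n = len(series) - (dim - 1) * delay
        return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

    def shape_distribution(series, bins=32, n_pairs=5000, seed=0):
        """Histogram of distances between random pairs of phase-space points."""
        points = delay_embed(series)
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(points), size=n_pairs)
        j = rng.integers(0, len(points), size=n_pairs)
        dists = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(dists, bins=bins, density=True)
        return hist     # fixed-length feature vector usable for classification
    ```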

  5. Visual and visuomotor processing of hands and tools as a case study of cross talk between the dorsal and ventral streams.

    Science.gov (United States)

    Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão

    2018-05-24

    A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams-hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.

  6. Importance of the Sequence-Directed DNA Shape for Specific Binding Site Recognition by the Estrogen-Related Receptor

    Directory of Open Access Journals (Sweden)

    Kareem Mohideen-Abdul

    2017-06-01

    Full Text Available Most nuclear receptors (NRs) bind DNA as dimers, either as hetero- or as homodimers, on DNA sequences organized as two half-sites with specific orientation and spacing. The dimerization of NRs on their cognate response elements (REs) involves specific protein–DNA and protein–protein interactions. The estrogen-related receptor (ERR) belongs to the steroid hormone nuclear receptor (SHR) family and shares strong similarity in its DNA-binding domain (DBD) with that of the estrogen receptor (ER). In vitro, ERR binds with high affinity inverted repeat REs with a 3-bp spacing (IR3), but in vivo, it preferentially binds to single half-site REs extended at the 5′-end by 3 bp [estrogen-related response elements (ERREs)], thus explaining why ERR was often inferred to be a purely monomeric receptor. Since its C-terminal ligand-binding domain is known to homodimerize with a strong dimer interface, we investigated the binding behavior of the isolated DBDs to different REs using electrophoretic migration, multi-angle static laser light scattering (MALLS), non-denaturing mass spectrometry, and nuclear magnetic resonance. In contrast to the ER DBD, the ERR DBD binds as a monomer to EREs (IR3), such as the tff1 ERE-IR3, but we identified a DNA sequence composed of an extended half-site embedded within an IR3 element (embedded ERRE/IR3) where stable dimer binding is observed. Using a series of chimera and mutant DNA sequences of ERREs and IR3 REs, we have found the key determinants for the binding of the ERR DBD as a dimer. Our results suggest that the sequence-directed DNA shape is more important than the exact nucleotide sequence for the binding of the ERR DBD to DNA as a dimer. Our work underlines the importance of shape-driven DNA readout mechanisms based on minor groove recognition and electrostatic potential. These conclusions may apply not only to ERR but also to other members of the SHR family, such as androgen or glucocorticoid, for which a strong well-conserved half

  7. Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.

    Science.gov (United States)

    Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J

    1997-01-01

    A right-handed man developed a sudden, transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of the recognition of facial and vocal expressions of emotion, we found an impairment of the vocal perception of fear, but not of other emotions such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expressions of emotion. Thus emotional perception varies according to the different aspects of emotions and the modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could indicate that the hippocampus is part of the auditory pathway of fear recognition.

  8. Depth-based human activity recognition: A comparative perspective study on feature extraction

    Directory of Open Access Journals (Sweden)

    Heba Hamdy Ali

    2018-06-01

    Full Text Available Depth-map-based human activity recognition is the process of labeling depth sequences with a particular activity. Applications of this problem provide robust solutions in domains such as surveillance systems, computer vision applications, and video retrieval systems. The task is challenging due to variations within one class, the need to distinguish between activities of different classes, and differences in video recording settings. In this study, we present a detailed study of current advances in depth-map-based image representations and the feature extraction process. Moreover, we discuss the state-of-the-art datasets and the subsequent classification procedure. A comparative study of some of the more popular depth-map approaches is also provided in greater detail. The proposed methods are evaluated on three depth-based datasets, "MSR Action 3D", "MSR Hand Gesture", and "MSR Daily Activity 3D", achieving 100%, 95.83%, and 96.55% accuracy, respectively, while combining depth and color features on the "RGBD-HuDaAct" dataset achieves 89.1%. Keywords: Activity recognition, Depth, Feature extraction, Video, Human body detection, Hand gesture

  9. Automatic gang graffiti recognition and interpretation

    Science.gov (United States)

    Parra, Albert; Boutin, Mireille; Delp, Edward J.

    2017-09-01

    One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.

  10. Influence of 3D particle shape on the mechanical behaviour through a novel characterization method

    Directory of Open Access Journals (Sweden)

    Ouhbi Noura

    2017-01-01

    Full Text Available The sensitivity of the mechanical behaviour of railway ballast to particle shape variation is studied through Discrete Element Method (DEM) numerical simulations, focusing on basic parameters such as solid fraction, coordination number, and force distribution. We present an innovative method to characterize 3D particle shape with high accuracy using Proper Orthogonal Decomposition (POD) of scanned ballast grains. The method enables not only shape characterization but also the generation of 3D distinct and angular shapes. Algorithms are designed for face and edge recognition.
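
    A compact sketch of the POD idea: resample each scanned grain as a radius profile over a common set of directions (assumed done upstream), stack the profiles, and take the SVD to obtain shape modes that can describe or regenerate grains. This is an illustrative reading of POD, not the authors' implementation.

    ```python
    # POD (via SVD) of grain radius profiles for shape characterization/generation.
    import numpy as np

    def pod_shape_modes(radius_profiles, n_modes=10):
        """radius_profiles: (n_grains, n_directions) radii of the scanned grains,
        all sampled along the same unit directions. Returns the mean shape, the
        first n_modes POD modes, and the per-grain coefficients."""
        mean_shape = radius_profiles.mean(axis=0)
        u, s, vt = np.linalg.svd(radius_profiles - mean_shape, full_matrices=False)
        modes = vt[:n_modes]                       # (n_modes, n_directions)
        coefficients = u[:, :n_modes] * s[:n_modes]
        return mean_shape, modes, coefficients

    def generate_grain(mean_shape, modes, coefficients):
        """Rebuild (or synthesize, with sampled coefficients) a radius profile."""
        return mean_shape + coefficients @ modes
    ```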

  11. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-10-01

    Full Text Available Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared

  12. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    Science.gov (United States)

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and can deliver superior detection results compared to the CNN method alone.
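
    As a rough illustration of the post-processing stage described above (a sketch, not the authors' exact configuration), CNN features can be reduced with PCA and classified with an SVM using scikit-learn. The feature files, retained dimensionality, and SVM settings below are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Hypothetical CNN features (n_samples x n_cnn_features) and binary labels:
      # 1 = live finger-vein image, 0 = presentation attack image.
      X_train = np.load("cnn_features_train.npy")
      y_train = np.load("labels_train.npy")
      X_test = np.load("cnn_features_test.npy")

      # PCA for dimensionality reduction of the CNN feature space,
      # followed by an SVM classifier, as outlined in the abstract.
      pad_classifier = make_pipeline(
          StandardScaler(),
          PCA(n_components=100),        # retained dimensionality is an assumption
          SVC(kernel="rbf", C=1.0),
      )
      pad_classifier.fit(X_train, y_train)
      predictions = pad_classifier.predict(X_test)    # 1 = live, 0 = attack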

  13. Postural Hand Synergies during Environmental Constraint Exploitation

    Directory of Open Access Journals (Sweden)

    Cosimo Della Santina

    2017-08-01

    Full Text Available Humans are able to intuitively exploit the shape of an object and environmental constraints to achieve stable grasps and perform dexterous manipulations. In doing that, a vast range of kinematic strategies can be observed. However, in this work we formulate the hypothesis that such ability can be described in terms of a synergistic behavior in the generation of hand postures, i.e., using a reduced set of commonly used kinematic patterns. This is in analogy with previous studies showing the presence of such behavior in different tasks, such as grasping. We investigated this hypothesis in experiments performed by six subjects, who were asked to grasp objects from a flat surface. We quantitatively characterized hand posture behavior from a kinematic perspective, i.e., the hand joint angles, both in pre-shaping and during the interaction with the environment. To determine the role of tactile feedback, we repeated the same experiments with subjects wearing a rigid shell on the fingertips to reduce cutaneous afferent inputs. Results show the persistence of at least two postural synergies in all the considered experimental conditions and phases. Tactile impairment does not significantly alter the first two synergies, and contact with the environment generates a change only for higher-order principal components. A good match also arises between the first synergy found in our analysis and the first synergy of grasping as quantified by previous work. The present study is motivated by the interest in learning from the human example, extracting lessons that can be applied in robot design and control. Thus, we conclude with a discussion on the implications of our findings for robotics.
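
    Postural synergies of this kind are typically extracted as the principal components of the recorded joint-angle data. The following is a minimal sketch of that idea, assuming a matrix of hand joint angles; the file name and dimensions are hypothetical, and this is not the study's exact analysis pipeline.

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical recording: one row per grasp sample, one column per hand joint angle.
      joint_angles = np.load("hand_joint_angles.npy")      # shape (n_samples, n_joints)

      pca = PCA()
      scores = pca.fit_transform(joint_angles)             # synergy activations per sample

      # The postural synergies are the principal axes; the first two typically
      # capture most of the variance in grasping postures.
      synergies = pca.components_
      explained = pca.explained_variance_ratio_
      print("variance explained by first two synergies:", explained[:2].sum())

      # Reconstruct postures using only the first two synergies.
      mean_posture = joint_angles.mean(axis=0)
      approx = mean_posture + scores[:, :2] @ synergies[:2]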

  14. Basic perceptual changes that alter meaning and neural correlates of recognition memory

    Directory of Open Access Journals (Sweden)

    Chuanji eGao

    2015-02-01

    Full Text Available It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected to either change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes (squiggles) were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), indicating that these potentials reflected conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates of memory.

  15. Basic perceptual changes that alter meaning and neural correlates of recognition memory.

    Science.gov (United States)

    Gao, Chuanji; Hermiller, Molly S; Voss, Joel L; Guo, Chunyan

    2015-01-01

    It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected to either change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes ("squiggles") were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential (ERP) correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), implying that these potentials indicated conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates of memory

  16. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    Directory of Open Access Journals (Sweden)

    Venkatraman Vishwesh

    2009-12-01

    Full Text Available Abstract Background The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. The performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under a situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape-based methods for specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. The examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD.
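
    For context, the USR baseline mentioned above is simple enough to sketch: it summarizes a molecule's atom coordinates by the first three moments of the distances to four reference points and compares descriptors with an inverse-Manhattan similarity. The code below is an illustrative NumPy version of that idea, not the benchmark implementation used in the study.

      import numpy as np

      def usr_descriptor(coords):
          """12-element USR shape descriptor from an (n_atoms, 3) coordinate array."""
          ctd = coords.mean(axis=0)                          # molecular centroid
          d_ctd = np.linalg.norm(coords - ctd, axis=1)
          cst = coords[np.argmin(d_ctd)]                     # atom closest to centroid
          fct = coords[np.argmax(d_ctd)]                     # atom farthest from centroid
          d_fct = np.linalg.norm(coords - fct, axis=1)
          ftf = coords[np.argmax(d_fct)]                     # atom farthest from fct

          descriptor = []
          for ref in (ctd, cst, fct, ftf):
              d = np.linalg.norm(coords - ref, axis=1)
              mu, sigma = d.mean(), d.std()
              skew = np.cbrt(((d - mu) ** 3).mean())         # signed cube root of 3rd moment
              descriptor.extend([mu, sigma, skew])
          return np.array(descriptor)

      def usr_similarity(desc_a, desc_b):
          """Similarity in (0, 1]; 1 means identical shape descriptors."""
          return 1.0 / (1.0 + np.abs(desc_a - desc_b).mean())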

  17. Use of pattern recognition and neural networks for non-metric sex diagnosis from lateral shape of calvarium: an innovative model for computer-aided diagnosis in forensic and physical anthropology.

    Science.gov (United States)

    Cavalli, Fabio; Lusnig, Luca; Trentin, Edmondo

    2017-05-01

    Sex determination on skeletal remains is one of the most important diagnoses in forensic cases and in demographic studies on ancient populations. Our purpose is to realize an automatic, operator-independent method to determine sex from bone shape and to test an intelligent, automatic pattern recognition system in an anthropological domain. Our multiple-classifier system is based exclusively on the morphological variants of a curve that represents the sagittal profile of the calvarium, modeled via artificial neural networks, and yields an accuracy higher than 80%. The application of this system to other bone profiles is expected to further improve the sensitivity of the methodology.

  18. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered networks and self-organizing analysis maps, which have strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinformatics applications.

  19. Control System Design of the YWZ Multi-Fingered Dexterous Hand

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2012-07-01

    Full Text Available The manipulation abilities of a multi-fingered dexterous hand, such as real-time motion, flexibility, grasp stability etc., are largely dependent on its control system. This paper developed a control system for the YWZ dexterous hand, which had five fingers and twenty degrees of freedom (DOFs). All of the finger joints of the YWZ dexterous hand were active joints, each driven by one of twenty micro-stepper motors. The main contribution of this paper was the use of stepper motor control to actuate the hand's fingers, thus increasing the hand's feasibility. Based on the actuators of the YWZ dexterous hand, we first developed an integrated circuit board (ICB), which was the communication hardware between the personal computer (PC) and the YWZ dexterous hand. The ICB included a centre controller, twenty driver chips, a USB port and other electrical parts. A communication procedure between the PC and the ICB was then developed to send the control commands that actuate the YWZ dexterous hand. Experimental results showed that under this control system the motion of the YWZ dexterous hand was real-time, and both the motion accuracy and the motion stability of the YWZ dexterous hand were reliable. Compared with other types of actuators used for dexterous hands, such as pneumatic servo cylinders, DC servo motors, and shape memory alloys, the experimental results verified that stepper motors were effective, economical, controllable and stable actuators for dexterous hands.

  20. Pattern recognition

    CERN Document Server

    Theodoridis, Sergios

    2003-01-01

    Pattern recognition is a scientific discipline that is becoming increasingly important in the age of automation and information handling and retrieval. Pattern Recognition, 2e covers the entire spectrum of pattern recognition applications, from image analysis to speech recognition and communications. This book presents cutting-edge material on neural networks - a set of linked microprocessors that can form associations and use pattern recognition to "learn" - and enhances student motivation by approaching pattern recognition from the designer's point of view. A direct result of more than 10

  1. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    Science.gov (United States)

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Square bananas, blue horses: The relative weight of shape and color in concept recognition and representation

    Directory of Open Access Journals (Sweden)

    Claudia eScorolli

    2015-10-01

    Full Text Available The present study investigates the role that shape and color play in the representation of animate (i.e., animals) and inanimate manipulable entities (i.e., fruits), and how the importance of these features is modulated by different tasks. Across three experiments participants were shown either images of entities (e.g., a sheep or a pineapple) or images of the same entities modified in color (e.g., a blue pineapple) or in shape (e.g., an elongated pineapple). In Experiment 1 we asked participants to categorize the entities as fruit or animal. Results showed that with animals color does not matter, while shape modifications determined a deterioration of the performance - stronger for fruit than for animals. To better understand the findings, in Experiment 2 participants were asked to judge if entities were graspable (manipulation evaluation task). Participants were faster with manipulable entities (fruit) than with animals; moreover, alterations in shape affected the response latencies more for animals than for fruit. In Experiment 3 (motion evaluation task), we replicated the disadvantage for shape-altered animals, while with fruits shape and color modifications produced no effect. By contrasting shape and color alterations, the present findings provide information on the relative weight of shape and color, suggesting that the action-based property of shape is more crucial than color for fruit categorization, while with animals it is critical for both manipulation and motion tasks. This contextual dependency is further revealed by explicit judgments on similarity - between the altered entities and the prototypical ones - provided after the different tasks. These results extend the current literature on affordances and biofunctionally embodied understanding, revealing the relative robustness of biofunctional activity compared to intellectual activity.

  3. [Effect of opioid receptors on acute stress-induced changes in recognition memory].

    Science.gov (United States)

    Liu, Ying; Wu, Yu-Wei; Qian, Zhao-Qiang; Yan, Cai-Fang; Fan, Ka-Min; Xu, Jin-Hui; Li, Xiao; Liu, Zhi-Qiang

    2016-12-25

    Although ample evidence has shown that acute stress impairs memory, the influences of acute stress on different phases of memory, such as acquisition, consolidation and retrieval, are different. Experimental data from both humans and animals support the idea that the endogenous opioid system plays a role in stress, as endogenous opioid release is increased and opioid receptors are activated during the stress experience. On the other hand, the endogenous opioid system also mediates learning and memory. The aim of the present study was to investigate the effect of acute forced swimming stress on recognition memory of C57 mice and the role of opioid receptors in this process, using a three-day novel object recognition task. The results showed that 15-min acute forced swimming impaired the retrieval of recognition memory, but had no effect on acquisition and consolidation of recognition memory. No significant change of object recognition memory was found in mice that were given naloxone, an opioid receptor antagonist, by intraperitoneal injection. However, intraperitoneal injection of naloxone before forced swimming stress could inhibit the impairment of recognition memory retrieval caused by forced swimming stress. The results of real-time PCR showed that acute forced swimming decreased the μ opioid receptor mRNA levels in whole brain and hippocampus, while the injection of naloxone before stress could reverse this change. These results suggest that acute stress may impair recognition memory retrieval via opioid receptors.

  4. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    Science.gov (United States)

    Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiment 1–2) or pairs (Experiment 3–6) during the study phase. They then recalled the word by a cue during a cued recall task (Experiment 1–4), and judged whether the presented two words were in the same or in a different order compared to the study phase during a recognition task (Experiment 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgment in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326

  5. Two speed factors of visual recognition independently correlated with fluid intelligence.

    Science.gov (United States)

    Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki

    2014-01-01

    Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience suggest that the primate visual system consists of two major pathways, the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence indicated a significant correlation between fluid intelligence and the inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We therefore examined the possibility that neural processing speed in the dorsal pathway also represents a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure of the mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with one's IT (recognition speed of visual objects). Our results support the new possibility that intelligence could be explained by two types of mental speed, one related to object recognition (IT) and another to the manipulation of mental images (MR).

  6. It takes two-skilled recognition of objects engages lateral areas in both hemispheres.

    Directory of Open Access Journals (Sweden)

    Merim Bilalić

    Full Text Available Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific, as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts were homologous areas on the right hemisphere also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas.

  7. Human Identification at a Distance Using Body Shape Information

    International Nuclear Information System (INIS)

    Rashid, N K A M; Yahya, M F; Shafie, A A

    2013-01-01

    The shape of the human body is unique from one person to another. This paper presents an intelligent system approach for human identification at a distance using human body shape information. The body features used are the head, shoulders, and trunk. Image processing techniques for detection of these body features were developed in this work. The features are then recognized using a fuzzy logic approach and used as inputs to a recognition system based on a multilayer neural network. The developed system is only applicable for recognizing a person from the frontal view and is specifically constrained to the male gender to simplify the algorithm. In this research, the accuracy for human identification using the proposed method is 77.5%. Thus, it is shown that humans can be identified at a distance using body shape information.

  8. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    Science.gov (United States)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality which captures a photo of the eye pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification, instead of fingerprints. Iris recognition was chosen for identification in this research because every individual has a distinctive iris pattern and the iris is protected by the cornea, so it keeps a fixed shape. This iris recognition consists of three steps: pre-processing of data, feature extraction, and feature matching. A Hough transformation is used in the pre-processing step to locate the iris area, and Daugman's rubber sheet model to normalize the iris data set into rectangular blocks. To characterize the iris, the box counting method was used to obtain the fractal dimension value of the iris. Tests were carried out using the k-fold cross-validation method with k = 5. Each test used 10 different values of K for the K-Nearest Neighbor (KNN) classifier. The best iris recognition accuracy obtained was 92.63%, for K = 3 in the K-Nearest Neighbor (KNN) method.
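
    The abstract does not spell out the computation, but a standard box-counting fractal dimension estimate (counting occupied boxes at several scales and fitting a log-log slope) combined with a KNN classifier could look roughly like the sketch below; the binarized, rubber-sheet-normalized iris blocks, file names, and box sizes are assumptions.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32)):
          """Estimate the fractal dimension of a binary image by box counting."""
          counts = []
          h, w = binary_img.shape
          for size in box_sizes:
              count = 0
              for y in range(0, h, size):
                  for x in range(0, w, size):
                      if binary_img[y:y + size, x:x + size].any():
                          count += 1            # box contains part of the pattern
              counts.append(count)
          # Fractal dimension is the slope of log(count) versus log(1/size).
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      # Hypothetical pre-processed data: normalized iris blocks and subject labels.
      iris_blocks = np.load("iris_blocks.npy")        # shape (n_samples, H, W), values 0/1
      labels = np.load("subject_labels.npy")

      features = np.array([[box_counting_dimension(block)] for block in iris_blocks])
      knn = KNeighborsClassifier(n_neighbors=3)       # K = 3 gave the best accuracy in the study
      knn.fit(features, labels)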

  9. 3D face modeling, analysis and recognition

    CERN Document Server

    Daoudi, Mohamed; Veltkamp, Remco

    2013-01-01

    3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications.

  10. Left is where the L is right. Significantly delayed reaction time in limb laterality recognition in both CRPS and phantom limb pain patients.

    Science.gov (United States)

    Reinersmann, Annika; Haarmeyer, Golo Sung; Blankenburg, Markus; Frettlöh, Jule; Krumova, Elena K; Ocklenburg, Sebastian; Maier, Christoph

    2010-12-17

    The body schema is based on an intact cortical body representation. Its disruption is indicated by delayed reaction times (RT) and high error rates when deciding on the laterality of a pictured hand in a limb laterality recognition task. Similarities in both cortical reorganisation and disrupted body schema have been found in two different unilateral pain syndromes, one with deafferentation (phantom limb pain, PLP) and one with pain-induced dysfunction (complex regional pain syndrome, CRPS). This study aims to compare the extent of impaired laterality recognition in these two groups. Performance on a test battery for attentional performance (TAP 2.0) and on a limb laterality recognition task was evaluated in CRPS (n=12), PLP (n=12) and healthy subjects (n=38). Differences between recognising affected and unaffected hands were analysed. CRPS patients and healthy subjects additionally completed a four-day training of limb laterality recognition. Reaction time was significantly delayed in both CRPS (2278±735.7ms) and PLP (2301.3±809.3ms) compared to healthy subjects (1826.5±517.0ms), despite normal TAP values in all groups. There were no differences between recognition of affected and unaffected hands in either patient group. Both healthy subjects and CRPS patients improved during training, but the RTs of CRPS patients (1874.5±613.3ms) remained slower than those of healthy subjects. Limb laterality recognition is thus impaired in CRPS patients, uninfluenced by attention and pain, and cannot be fully reversed by training alone. This suggests the involvement of complex central nervous system mechanisms in the disruption of the body schema. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Task relevance differentially shapes ventral visual stream sensitivity to visible and invisible faces

    DEFF Research Database (Denmark)

    Kouider, Sid; Barbot, Antoine; Madsen, Kristoffer Hougaard

    2016-01-01

    Top-down modulations of the visual cortex can be driven by task relevance. Yet, several accounts propose that the perceptual inferences underlying conscious recognition involve similar top-down modulations of sensory responses. Studying the pure impact of task relevance on sensory responses requires dissociating it from the top-down influences underlying conscious recognition. Here, using visual masking to abolish perceptual consciousness in humans, we report that functional magnetic resonance imaging (fMRI) responses to invisible faces in the fusiform gyrus are enhanced when they are task relevant. Task relevance crucially shapes the sensitivity of fusiform regions to face stimuli, leading from enhancement to suppression of neural activity when the top-down influences accruing from conscious recognition are prevented.

  12. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to ensure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Owing to the advantage of the LCA method in balancing learning between global and local features, a large number of samples can be used efficiently in learning.

  13. 8 CFR 292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    ...; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. 292.2...; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization established in the United...

  14. An Alternative Myoelectric Pattern Recognition Approach for the Control of Hand Prostheses: A Case Study of Use in Daily Life by a Dysmelia Subject

    Science.gov (United States)

    Ahlberg, Johan; Lendaro, Eva; Hermansson, Liselotte; Håkansson, Bo; Ortiz-Catalan, Max

    2018-01-01

    The functionality of upper limb prostheses can be improved by intuitive control strategies that use bioelectric signals measured at the stump level. One such strategy is the decoding of motor volition via myoelectric pattern recognition (MPR), which has shown promising results in controlled environments and more recently in clinical practice. However, not much has been reported about daily life implementation and real-time accuracy of these decoding algorithms. This paper introduces an alternative approach in which MPR allows intuitive control of four different grips and open/close in a multifunctional prosthetic hand. We conducted a clinical proof-of-concept in activities of daily life by constructing a self-contained, MPR-controlled, transradial prosthetic system provided with a novel user interface meant to log errors during real-time operation. The system was used for five days by a unilateral dysmelia subject whose hand had never developed, and who nevertheless learned to generate patterns of myoelectric activity, reported as intuitive, for multi-functional prosthetic control. The subject was instructed to manually log errors when they occurred via the user interface mounted on the prosthesis. This allowed the collection of information about prosthesis usage and real-time classification accuracy. The Assessment of Capacity for Myoelectric Control test was used to compare the proposed approach to the conventional prosthetic control approach, direct control. Regarding the MPR approach, the subject reported more intuitive control when selecting the different grips, but also higher uncertainty during proportional continuous movements. This paper represents an alternative to the conventional use of MPR, and this alternative may be particularly suitable for certain types of amputee patients. Moreover, it represents a further validation of MPR with dysmelia cases. PMID:29637030

  15. Feeling touch on the own hand restores the capacity to visually discriminate it from someone else' hand: Pathological embodiment receding in brain-damaged patients.

    Science.gov (United States)

    Fossataro, Carlotta; Bruno, Valentina; Gindri, Patrizia; Pia, Lorenzo; Berti, Anna; Garbarini, Francesca

    2017-06-23

    The sense of body ownership, i.e., the belief that a specific body part belongs to us, can be selectively impaired in brain-damaged patients. Recently, a pathological form of embodiment has been described in patients who, when the examiner's hand is located in a body-congruent position, systematically claim that it is their own hand (E+ patients). This paradoxical behavior suggests that, in these patients, the altered sense of body ownership also affects their capacity of visually discriminating the body-identity details of the own and the alien hand, even when both hands are clearly visible on the table. Here, we investigated whether, in E+ patients with spared tactile sensibility, a coherent body ownership could be restored by introducing a multisensory conflict between what the patients feel on the own hand and what they see on the alien hand. To this aim, we asked the patients to rate their sense of body ownership over the alien hand, either after segregated tactile stimulations of the own hand (out of view) and of the alien hand (visible) or after synchronous and asynchronous tactile stimulations of both hands, as in the rubber hand illusion set-up. Our results show that, when the tactile sensation perceived on the patient's own hand was in conflict with visual stimuli observed on the examiner's hand, E+ patients noticed the conflict and spontaneously described visual details of the (visible) examiner's hand (e.g., the fingers length, the nails shape, the skin color…), to conclude that it was not their own hand. These data represent the first evidence that, in E+ patients, an incongruent visual-tactile stimulation of the own and of the alien hand reduces, at least transitorily, the delusional body ownership over the alien hand, by restoring the access to the perceptual self-identity system, where visual body identity details are stored. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Powered exoskeleton with palm degrees of freedom for hand rehabilitation.

    Science.gov (United States)

    Richards, Daniel S; Georgilas, Ioannis; Dagnino, Giulio; Dogramadzi, Sanja

    2015-08-01

    Robotic rehabilitation is a currently underutilised field with the potential to allow huge cost savings within healthcare. Existing rehabilitation exoskeletons oversimplify the importance of movement of the hand while undertaking everyday tasks. Within this study, an investigation was undertaken to establish the extent to which the degrees of freedom within the palm affect ability to undertake everyday tasks. Using a 5DT data glove, bend sensing resistors and restrictors of palm movement, 20 participants were recruited to complete tasks that required various hand shapes. Collected data was processed and palm arching trends were identified for each grasping task. It was found that the extent of utilizing arches in the palm varied with each exercise, but was extensively employed throughout. An exoskeleton was subsequently designed with consideration of the identified palm shapes. This design included a number of key features that accommodated for a variety of hand sizes, a novel thumb joint and a series of dorsally mounted servos. Initial exoskeleton testing was undertaken by having a participant complete the same exercises while wearing the exoskeleton. The angles formed by the user during this process were then compared to those recorded by 2 other participants who had completed the same tasks without exoskeleton. It was found that the exoskeleton was capable of forming the required arches for completing the tasks, with differences between participants attributed to individual ergonomic differences.

  17. Preference for orientations commonly viewed for one's own hand in the anterior intraparietal cortex.

    Directory of Open Access Journals (Sweden)

    Regine Zopf

    Full Text Available Brain regions in the intraparietal and the premotor cortices selectively process visual and multisensory events near the hands (peri-hand space). Visual information from the hand itself modulates this processing, potentially because it is used to estimate the location of one's own body and the surrounding space. In humans, specific occipitotemporal areas process visual information of specific body parts such as hands. Here we used an fMRI block design to investigate if anterior intraparietal and ventral premotor 'peri-hand areas' exhibit selective responses to viewing images of hands and viewing specific hand orientations. Furthermore, we investigated if the occipitotemporal 'hand area' is sensitive to viewed hand orientation. Our findings demonstrate increased BOLD responses in the left anterior intraparietal area when participants viewed hands and feet as compared to faces and objects. Anterior intraparietal and also occipitotemporal areas in the left hemisphere exhibited response preferences for viewing right hands with orientations commonly viewed for one's own hand as compared to uncommon own-hand orientations. Our results indicate that both anterior intraparietal and occipitotemporal areas encode visual limb-specific shape and orientation information.

  18. Finger vein recognition using local line binary pattern.

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using the finger vein is presented. The finger vein can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), which uses a square neighbourhood. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP).
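
    As a simplified sketch of the line-shaped neighbourhood idea (not the exact weighting of the published LLBP operator), each neighbour on a horizontal or vertical line is compared against the centre pixel and the resulting bits are combined into a code; horizontal and vertical codes can then be merged into a magnitude. The line length below is an assumed parameter.

      import numpy as np

      def llbp_line_code(image, y, x, half_len=6, axis="horizontal"):
          """Binary code along a line neighbourhood centred at (y, x).

          Simplified: neighbours on the line are thresholded against the centre
          pixel and weighted by powers of two."""
          centre = image[y, x]
          if axis == "horizontal":
              line = image[y, x - half_len: x + half_len + 1]
          else:
              line = image[y - half_len: y + half_len + 1, x]
          neighbours = np.delete(line, half_len)          # drop the centre pixel itself
          bits = (neighbours >= centre).astype(np.uint32)
          weights = 2 ** np.arange(bits.size, dtype=np.uint32)
          return int(np.dot(bits, weights))

      def llbp_magnitude(image, y, x, half_len=6):
          """Combine horizontal and vertical line codes into a single response."""
          h = llbp_line_code(image, y, x, half_len, "horizontal")
          v = llbp_line_code(image, y, x, half_len, "vertical")
          return np.sqrt(float(h) ** 2 + float(v) ** 2)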

  19. In-the-wild facial expression recognition in extreme poses

    Science.gov (United States)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the laboratory environment to in-the-wild circumstances, which is challenging, especially under extreme poses. Current expression recognition systems typically try to avoid pose effects and aim for generally applicable models. In this work, we take the opposite approach: we consider the head pose explicitly and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one of the pre-defined head pose classes, and then recognizing the facial expression within each pose class. Our experiments show that the recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP and geometric features) with deep learning features as the representation of the expressions. The hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement SVM and random forest as the prediction models. To train and test our methodology, we labeled the face dataset with the 6 basic expressions.

  20. The re-appreciation of the humanities in contemporary philosophy of science: From recognition to exaggeration?

    Directory of Open Access Journals (Sweden)

    Renato Coletto

    2013-06-01

    Full Text Available In the course of the centuries, the 'reputation' and status attributed to the humanities underwent different phases. One of their lowest moments can be traced to the positivist period. This article explored the reasons underlying the gradual re-evaluation of the scientific status and relevance of the humanities in the philosophy of science of the 20th century. On the basis of a historical analysis it was argued, on the one hand, that such recognition is positive because it abolishes an unjustified prejudice that restricted the status of 'science' to the natural sciences. On the other hand, it was argued that the reasons behind such recognition might not always be sound and may be inspired by (and lead to) a certain relativism harbouring undesired consequences. In the final part of this article (dedicated to Prof. J.J. [Ponti] Venter), a brief 'postscript' sketched his evaluation of the role of philosophy.

  1. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) so that it approaches the quality of human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points we can achieve the state-of-the-art recognition rate. However, this state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
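
    The specific geometrical features are not listed in the abstract; one common choice is normalized pairwise distances between detected facial points, as in this illustrative sketch. The landmark arrays, file names, and the SVM classifier are assumptions, not the authors' configuration.

      import numpy as np
      from itertools import combinations
      from sklearn.svm import SVC

      def geometric_features(landmarks):
          """Normalized pairwise distances between facial points (n_points, 2)."""
          dists = [np.linalg.norm(landmarks[i] - landmarks[j])
                   for i, j in combinations(range(len(landmarks)), 2)]
          dists = np.array(dists)
          return dists / (dists.max() + 1e-8)          # rough scale invariance

      # Hypothetical data: 8 facial points per frame and one expression label per frame.
      frames = np.load("facial_points.npy")            # shape (n_frames, 8, 2)
      labels = np.load("expression_labels.npy")        # 6 basic expressions

      X = np.array([geometric_features(f) for f in frames])
      clf = SVC(kernel="rbf").fit(X, labels)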

  2. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because we evaluated cognitive errors only, rather than including errors of interpretation. The most commonly missed finding was a small fracture, particularly of the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors.

  3. Neural representation of hand kinematics during prehension in posterior parietal cortex of the macaque monkey.

    Science.gov (United States)

    Chen, Jessie; Reitzen, Shari D; Kohlenstein, Jane B; Gardner, Esther P

    2009-12-01

    Studies of hand manipulation neurons in posterior parietal cortex of monkeys suggest that their spike trains represent objects by the hand postures needed for grasping or by the underlying patterns of muscle activation. To analyze the role of hand kinematics and object properties in a trained prehension task, we correlated the firing rates of neurons in anterior area 5 with hand behaviors as monkeys grasped and lifted knobs of different shapes and locations in the workspace. Trials were divided into four classes depending on the approach trajectory: forward, lateral, and local approaches, and regrasps. The task factors controlled by the animal - how and when he used the hand - appeared to play the principal roles in modulating firing rates of area 5 neurons. In all, 77% of neurons studied (58/75) showed significant effects of approach style on firing rates; 80% of the population responded at higher rates and for longer durations on forward or lateral approaches that included reaching, wrist rotation, and hand preshaping prior to contact, but only 13% distinguished the direction of reach. The higher firing rates in reach trials reflected not only the arm movements needed to direct the hand to the target before contact, but persisted through the contact, grasp, and lift stages. Moreover, the approach style exerted a stronger effect on firing rates than object features, such as shape and location, which were distinguished by half of the population. Forty-three percent of the neurons signaled both the object properties and the hand actions used to acquire them. However, the spread in firing rates evoked by each knob on reach and no-reach trials was greater than distinctions between different objects grasped with the same approach style. Our data provide clear evidence for synergies between reaching and grasping that may facilitate smooth, coordinated actions of the arm and hand.

  4. C-shaped root canal configuration in mandibular second premolar: Report of an unusual case and its endodontic management

    Directory of Open Access Journals (Sweden)

    Dipali Y Shah

    2012-01-01

    Full Text Available The C-shaped root canal system is an aberration of the root canal system in which a characteristic fin or web connects individual canals, resulting in a C-shaped cross section. This configuration has rarely been reported in the mandibular second premolar. The only other known reported case of a C-shaped canal, with its configuration, in relation to a mandibular second premolar is of an extracted tooth. The purpose of this report is to describe the diagnosis, configuration and endodontic management of a C-shaped root canal in a mandibular second premolar. Clinical techniques to address the challenges in endodontic disinfection as well as cleaning and shaping of the C-shaped canal, which is prone to endodontic mishaps, are also discussed in this case report. Reporting of this case emphasizes the need for, and added advantage of, using the dental operating microscope hand in hand with conventional radiography in the management of the C-shaped root canal configuration.

  5. Manifold Shape: from Differential Geometry to Mathematical Morphology

    OpenAIRE

    Roerdink, J.B.T.M.

    1994-01-01

    Much progress has been made in extending Euclidean mathematical morphology to more complex structures such as complete lattices or spaces with a non-commutative symmetry group. Such generalizations are important for practical situations such as translation and rotation invariant pattern recognition or shape description of patterns on spherical surfaces. Also in computer vision much use is made of spherical mappings to describe the world as seen by a human or machine observer. Stimulated by th...

  6. Mojo Hand, a TALEN design tool for genome editing applications.

    Science.gov (United States)

    Neff, Kevin L; Argue, David P; Ma, Alvin C; Lee, Han B; Clark, Karl J; Ekker, Stephen C

    2013-01-16

    Recent studies of transcription activator-like (TAL) effector domains fused to nucleases (TALENs) demonstrate enormous potential for genome editing. Effective design of TALENs requires a combination of selecting appropriate genetic features, finding pairs of binding sites based on a consensus sequence, and, in some cases, identifying endogenous restriction sites for downstream molecular genetic applications. We present the web-based program Mojo Hand for designing TAL and TALEN constructs for genome editing applications (http://www.talendesign.org). We describe the algorithm and its implementation. The features of Mojo Hand include (1) automatic download of genomic data from the National Center for Biotechnology Information, (2) analysis of any DNA sequence to reveal pairs of binding sites based on a user-defined template, (3) selection of restriction-enzyme recognition sites in the spacer between the TAL monomer binding sites including options for the selection of restriction enzyme suppliers, and (4) output files designed for subsequent TALEN construction using the Golden Gate assembly method. Mojo Hand enables the rapid identification of TAL binding sites for use in TALEN design. The assembly of TALEN constructs, is also simplified by using the TAL-site prediction program in conjunction with a spreadsheet management aid of reagent concentrations and TALEN formulation. Mojo Hand enables scientists to more rapidly deploy TALENs for genome editing applications.

  7. Mojo Hand, a TALEN design tool for genome editing applications

    Directory of Open Access Journals (Sweden)

    Neff Kevin L

    2013-01-01

    Full Text Available Abstract Background Recent studies of transcription activator-like (TAL) effector domains fused to nucleases (TALENs) demonstrate enormous potential for genome editing. Effective design of TALENs requires a combination of selecting appropriate genetic features, finding pairs of binding sites based on a consensus sequence, and, in some cases, identifying endogenous restriction sites for downstream molecular genetic applications. Results We present the web-based program Mojo Hand for designing TAL and TALEN constructs for genome editing applications (http://www.talendesign.org). We describe the algorithm and its implementation. The features of Mojo Hand include (1) automatic download of genomic data from the National Center for Biotechnology Information, (2) analysis of any DNA sequence to reveal pairs of binding sites based on a user-defined template, (3) selection of restriction-enzyme recognition sites in the spacer between the TAL monomer binding sites, including options for the selection of restriction enzyme suppliers, and (4) output files designed for subsequent TALEN construction using the Golden Gate assembly method. Conclusions Mojo Hand enables the rapid identification of TAL binding sites for use in TALEN design. The assembly of TALEN constructs is also simplified by using the TAL-site prediction program in conjunction with a spreadsheet management aid of reagent concentrations and TALEN formulation. Mojo Hand enables scientists to more rapidly deploy TALENs for genome editing applications.
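
    Mojo Hand's own algorithm is not reproduced in the abstract; the sketch below is only a rough illustration of the general idea of scanning a sequence for candidate TALEN half-site pairs (two binding sites on opposite strands separated by a spacer), assuming the common convention that each TAL binding site is preceded by a 5' T. The site length and spacer range are placeholder parameters, not the tool's defaults.

      from typing import List, Tuple

      def revcomp(seq: str) -> str:
          comp = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
          return "".join(comp[b] for b in reversed(seq))

      def candidate_talen_pairs(seq: str,
                                site_len: int = 15,
                                spacer_range: Tuple[int, int] = (14, 18)) -> List[dict]:
          """Scan a DNA sequence for candidate TALEN half-site pairs.

          A left half-site is taken on the forward strand and a right half-site on the
          reverse strand; both are assumed to start with a 5' T, so the right site
          ends with an A on the forward strand."""
          seq = seq.upper()
          pairs = []
          for i in range(len(seq) - site_len):
              if seq[i] != "T":
                  continue                                   # left half-site must start with T
              left_end = i + site_len
              for spacer_len in range(spacer_range[0], spacer_range[1] + 1):
                  j = left_end + spacer_len                  # start of right half-site on forward strand
                  if j + site_len > len(seq):
                      break
                  right_fwd = seq[j:j + site_len]
                  if right_fwd[-1] != "A":
                      continue                               # right half-site starts with T on the reverse strand
                  pairs.append({
                      "left_site": seq[i:left_end],
                      "spacer": seq[left_end:j],             # region to check for restriction sites
                      "right_site_revcomp": revcomp(right_fwd),
                      "position": i,
                  })
          return pairs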

  8. Hand motion modeling for psychology analysis in job interview using optical flow-history motion image: OF-HMI

    Science.gov (United States)

    Khalifa, Intissar; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    To survive the competition, companies always want to have the best employees. The selection depends on the answers to the interviewer's questions and on the behavior of the candidate during the interview session. The study of this behavior is usually based on a psychological analysis of the movements accompanying the answers and discussions. Few techniques have been proposed to date to automatically analyze a candidate's non-verbal behavior. This paper is part of a work psychology recognition system; it concentrates on spontaneous hand gestures, which are very significant in interviews according to psychologists. We propose a motion history representation of the hand based on a hybrid approach that merges optical flow and history motion images. The optical flow technique is used first to detect hand motions in each frame of a video sequence. Second, we use history motion images (HMI) to accumulate the output of the optical flow in order to finally obtain a good representation of the hand's local movement in a global temporal template.
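
    A minimal sketch of the hybrid idea, assuming dense optical flow whose thresholded magnitude is accumulated into a decaying motion-history template: it uses OpenCV's Farnebäck flow, and the video path, threshold and decay factor are placeholder assumptions rather than the authors' exact formulation.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("interview.mp4")         # hypothetical input video
      ok, frame = cap.read()
      prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      mhi = np.zeros(prev_gray.shape, dtype=np.float32)
      decay, motion_threshold = 0.9, 2.0              # assumed parameters

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

          # Dense optical flow between consecutive frames (Farnebäck method).
          flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          magnitude = np.linalg.norm(flow, axis=2)

          # History motion image: decay old motion, stamp pixels that move now.
          mhi *= decay
          mhi[magnitude > motion_threshold] = 1.0
          prev_gray = gray

      # 'mhi' is a global temporal template of where and how recently motion occurred.
      cv2.imwrite("motion_history.png", (mhi * 255).astype(np.uint8))
      cap.release()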

  9. On-Road Vehicle Recognition Using the Symmetry Property and Snake Models

    Directory of Open Access Journals (Sweden)

    Shumin Liu

    2013-12-01

    Full Text Available Vehicle recognition is a fundamental task for advanced driver assistance systems and contributes to the avoidance of collisions with other vehicles. In recent years, numerous approaches using monocular image analysis have been reported for vehicle detection. These approaches are primarily applied in motorway scenarios and may not be suitable for complex urban traffic with a diversity of obstacles and a cluttered background. In this paper, stereovision is first used to segment potential vehicles from the traffic background. Given that the contour curve is the most straightforward cue for object recognition, we present a novel method for complete contour curve extraction using symmetry properties and a snake model. Finally, two shape factors, the aspect ratio and the area ratio calculated from the contour curve, are used to judge whether the detected object is a vehicle or not. The approach presented here was tested on a substantial set of urban traffic images, and the experimental results demonstrated that the correct recognition rate for vehicles reaches 93%.
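
    As an illustration of the final decision step only (not the full symmetry-and-snake pipeline), the two shape factors named in the abstract can be computed from an extracted contour with OpenCV; the mask file and threshold values below are placeholders.

      import cv2
      import numpy as np

      def vehicle_shape_factors(contour):
          """Aspect ratio and area ratio (extent) of a closed contour."""
          x, y, w, h = cv2.boundingRect(contour)
          aspect_ratio = w / float(h)
          area_ratio = cv2.contourArea(contour) / float(w * h)   # contour area vs. bounding box
          return aspect_ratio, area_ratio

      def looks_like_vehicle(contour,
                             aspect_bounds=(0.8, 2.5),            # placeholder limits
                             min_area_ratio=0.6):
          aspect_ratio, area_ratio = vehicle_shape_factors(contour)
          return (aspect_bounds[0] <= aspect_ratio <= aspect_bounds[1]
                  and area_ratio >= min_area_ratio)

      # Example: contours of candidate objects segmented from a stereo disparity mask.
      mask = cv2.imread("candidate_mask.png", cv2.IMREAD_GRAYSCALE)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      vehicles = [c for c in contours if looks_like_vehicle(c)]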

  10. Goal-seeking neural net for recall and recognition

    Science.gov (United States)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand, recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all the signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and pipeline mode from the last layer inward. A model of a complex neuron with varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  11. Mandarin-Speaking Children’s Speech Recognition: Developmental Changes in the Influences of Semantic Context and F0 Contours

    Directory of Open Access Journals (Sweden)

    Hong Zhou

    2017-06-01

    Full Text Available The goal of this developmental speech perception study was to assess whether and how age group modulated the influences of high-level semantic context and low-level fundamental frequency (F0) contours on the recognition of Mandarin speech by elementary and middle-school-aged children in quiet and interference backgrounds. The results revealed different patterns for semantic and F0 information. On the one hand, age group modulated significantly the use of F0 contours, indicating that elementary school children relied more on natural F0 contours than middle school children during Mandarin speech recognition. On the other hand, there was no significant modulation effect of age group on semantic context, indicating that children of both age groups used semantic context to assist speech recognition to a similar extent. Furthermore, the significant modulation effect of age group on the interaction between F0 contours and semantic context revealed that younger children could not make better use of semantic context in recognizing speech with flat F0 contours compared with natural F0 contours, while older children could benefit from semantic context even when natural F0 contours were altered, thus confirming the important role of F0 contours in Mandarin speech recognition by elementary school children. The developmental changes in the effects of high-level semantic and low-level F0 information on speech recognition might reflect the differences in auditory and cognitive resources associated with processing of the two types of information in speech perception.

  12. Simple shape space for 3D face registration

    Science.gov (United States)

    Košir, Andrej; Perkon, Igor; Bracun, Drago; Tasic, Jurij; Mozina, Janez

    2009-09-01

    Three dimensional (3D) face recognition is a topic of increasing interest in biometric applications. In our research framework we developed a laser scanner that provides 3D point cloud information and texture data. In a user scenario with cooperative subjects under indoor lighting conditions, we address three problems of 3D face biometrics: face registration, the formulation of a shape space together with a specially designed gradient algorithm, and the impact of the initial approximation on the convergence of the registration algorithm. By defining face registration as the problem of aligning a 3D data cloud with a predefined reference template, we solve the registration problem with a second-order gradient algorithm working on a shape space designed to reduce the computational complexity of the method.
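
    The shape-space gradient algorithm itself is only summarized above; as a rough stand-in, the following sketch shows the core of what "aligning a 3D data cloud with a predefined reference template" computes, using a closed-form least-squares rigid alignment over corresponding points. This is an assumption for illustration: the actual method iterates a second-order gradient scheme over a reduced shape space rather than using this closed form.

```python
import numpy as np

def rigid_align(cloud, template):
    """Closed-form least-squares rotation + translation mapping `cloud` onto `template`.

    Both inputs are (N, 3) arrays with row-wise corresponding points.
    """
    c_mean, t_mean = cloud.mean(axis=0), template.mean(axis=0)
    H = (cloud - c_mean).T @ (template - t_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against an improper (reflected) rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = t_mean - R @ c_mean
    return R, t

# Hypothetical usage: bring a scanned face cloud into the template's frame.
# R, t = rigid_align(cloud, template)
# aligned = cloud @ R.T + t
```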

  13. Performance Comparison Between FEDERICA Hand and LARM Hand

    OpenAIRE

    Carbone, Giuseppe; Rossi, Cesare; Savino, Sergio

    2015-01-01

    This paper describes two robotic hands that have been developed at the University Federico II of Naples and at the University of Cassino. FEDERICA Hand and LARM Hand are described in terms of design and operational features. In particular, careful attention is paid to the differences between the above-mentioned hands in terms of transmission systems. FEDERICA Hand uses tendons and pulleys to drive the phalanxes, while LARM Hand uses cross four-bar linkages. Results of experime...

  14. The Swipe Card Model of Odorant Recognition 

    Directory of Open Access Journals (Sweden)

    Jennifer C. Brookes

    2012-11-01

    Full Text Available Just how we discriminate between the different odours we encounter is not completely understood yet. While obviously a matter involving biology, the core issue is a matter for physics: what microscopic interactions enable the receptors in our noses (small protein switches) to distinguish scent molecules? We survey what is and is not known about the physical processes that take place when we smell things, highlighting the difficulties in developing a full understanding of the mechanics of odorant recognition. The main current theories, discussed here, fall into two major groups. One class emphasises the scent molecule's shape, and is described informally as a "lock and key" mechanism. But there is another category, which we focus on and which we call "swipe card" theories: the molecular shape must be good enough, but the information that identifies the smell involves other factors. One clearly-defined "swipe card" mechanism that we discuss here is Turin's theory, in which inelastic electron tunnelling is used to discern olfactant vibration frequencies. This theory is explicitly quantal, since it requires the molecular vibrations to take in or give out energy only in discrete quanta. These ideas lead to obvious experimental tests and challenges. We describe the current theory in a form that takes into account molecular shape as well as olfactant vibrations. It emerges that this theory can explain many observations hard to reconcile in other ways. There are still some important gaps in a comprehensive physics-based description of the central steps in odorant recognition. We also discuss how far these ideas carry over to analogous processes involving other small biomolecules, like hormones, steroids and neurotransmitters. We conclude with a discussion of possible quantum behaviours in biology more generally, the case of olfaction being just one example. This paper is presented in honour of Prof. Marshall Stoneham who passed away unexpectedly during its writing.

  15. Grasps Recognition and Evaluation of Stroke Patients for Supporting Rehabilitation Therapy

    Directory of Open Access Journals (Sweden)

    Beatriz Leon

    2014-01-01

    Full Text Available Stroke survivors often suffer impairments of their wrist and hand. Robot-mediated rehabilitation techniques have been proposed as a way to enhance conventional therapy, based on intensive repeated movements. Amongst the activities of daily living, grasping is one of the most recurrent. Our aim is to incorporate the detection of grasps in the machine-mediated rehabilitation framework so that it can be built into interactive therapeutic games. In this study, we developed and tested a method based on support vector machines for recognizing various grasp postures while wearing a passive exoskeleton for hand and wrist rehabilitation after stroke. The experiment was conducted with ten healthy subjects and eight stroke patients performing the grasping gestures. The method was tested in terms of accuracy and robustness with respect to inter-subject variability and differences between grasps. Our results show reliable recognition while also indicating that the recognition accuracy can be used to assess the patients' ability to consistently repeat the gestures. Additionally, a grasp quality measure was proposed to assess the capability of stroke patients to perform grasp postures in a similar way to healthy people. These two measures can potentially be used as complements to other upper limb motion tests.

  16. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  17. FMRI evidence of 'mirror' responses to geometric shapes.

    Science.gov (United States)

    Press, Clare; Catmur, Caroline; Cook, Richard; Widmann, Hannah; Heyes, Cecilia; Bird, Geoffrey

    2012-01-01

    Mirror neurons may be a genetic adaptation for social interaction. Alternatively, the associative hypothesis proposes that the development of mirror neurons is driven by sensorimotor learning, and that, given suitable experience, mirror neurons will respond to any stimulus. This hypothesis was tested using fMRI adaptation to index populations of cells with mirror properties. After sensorimotor training, where geometric shapes were paired with hand actions, BOLD response was measured while human participants experienced runs of events in which shape observation alternated with action execution or observation. Adaptation from shapes to action execution, and critically, observation, occurred in ventral premotor cortex (PMv) and inferior parietal lobule (IPL). Adaptation from shapes to execution indicates that neuronal populations responding to the shapes had motor properties, while adaptation to observation demonstrates that these populations had mirror properties. These results indicate that sensorimotor training induced populations of cells with mirror properties in PMv and IPL to respond to the observation of arbitrary shapes. They suggest that the mirror system has not been shaped by evolution to respond in a mirror fashion to biological actions; instead, its development is mediated by stimulus-general processes of learning within a system adapted for visuomotor control.

  18. Distinguishing familiarity from fluency for the compound word pair effect in associative recognition.

    Science.gov (United States)

    Ahmad, Fahad N; Hockley, William E

    2017-09-01

    We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of pairs on separate screens at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs with no difference in discrimination) did not reduce. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A) but in associative recognition, the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded with matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on enhanced familiarity of unitized CW pairs.

  19. Intersection Recognition and Guide-Path Selection for a Vision-Based AGV in a Bidirectional Flow Network

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2014-03-01

    Full Text Available Vision recognition and RFID perception are used to develop a smart AGV travelling on fixed paths while retaining low cost, simplicity and reliability. Visible landmarks can describe features of the shapes and geometric dimensions of lines and intersections, and RFID tags can directly record global locations on pathways and the local topological relations of crossroads. A topological map is convenient to build and edit without the need for accurate poses when establishing a priori knowledge of a workplace. To obtain the flexibility of bidirectional movement along guide-paths, a camera placed in the centre of the AGV looks vertically downward at landmarks on the floor. A small visual field presents many difficulties for vision guidance, especially for real-time, correct and reliable recognition of multi-branch crossroads. First, the region projection and contour scanning methods are both used to extract shape features. Then LDA is used to reduce the dimensionality of the features. Third, a hierarchical SVM classifier is proposed to classify the multi-branch patterns once the shape features are complete. Our experiments in landmark recognition and navigation show that the low-cost vision system is insensitive to visual noise, image breakage and floor changes, and that the vision-based AGV can locate itself precisely on its paths, recognize different crossroads intelligently by verifying the agreement of vision and RFID information, and select its next pathway efficiently in a bidirectional flow network.
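
    A minimal sketch of the "shape features, then LDA, then SVM" chain follows, assuming scikit-learn and already-extracted feature vectors; a single multi-class SVC stands in for the paper's hierarchical SVM, and the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: shape-feature vectors extracted from landmark images (synthetic placeholder data),
# y: crossroad pattern labels (e.g. 0 = straight, 1 = T-junction, 2 = cross, 3 = end).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)

# LDA reduces the feature dimension (to at most n_classes - 1), then an SVM classifies
# the multi-branch pattern; a hierarchy of such SVMs could refine coarse classes further.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```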

  20. Back to basics: hand hygiene and surgical hand antisepsis.

    Science.gov (United States)

    Spruce, Lisa

    2013-11-01

    Health care-associated infections (HAIs) are a significant issue in the United States and throughout the world, but following proper hand hygiene practices is the most effective and least expensive way to prevent HAIs. Hand hygiene is inexpensive and protects patients and health care personnel alike. The four general types of hand hygiene that should be performed in the perioperative environment are washing hands that are visibly soiled, hand hygiene using alcohol-based products, surgical hand scrubs, and surgical hand scrubs using an alcohol-based surgical hand rub product. Barriers to proper hand hygiene may include not thinking about it, forgetting, skin irritation, a lack of role models, or a lack of a safety culture. One strategy for improving hand hygiene practices is monitoring hand hygiene as part of a quality improvement project, but the most important aspect for perioperative team members is to set an example for other team members by following proper hand hygiene practices and reminding each other to perform hand hygiene. Copyright © 2013 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  1. THE OCCURRENCE OF THE RADIAL CLUB HAND IN CHILDREN WITH DIFFERENT SYNDROMES

    Directory of Open Access Journals (Sweden)

    Sergey Ivanovich Golyana

    2013-03-01

    Full Text Available Radial club hand is a developmental anomaly of the upper extremity, characterized by longitudinal underdevelopment of the forearm and hand on the radial side, consisting of hypoplasia or aplasia of the radial bone and of the thumb to varying degrees of severity. Characteristic symptoms of this developmental anomaly are shortening and bow-shaped curvature of the forearm, palmar and radial deviation of the hand, underdevelopment of the thumb from its proximal segments and structures, anomalies of development of the three-phalanx fingers of the hand (most often the 2nd to 4th), and impairment of the cosmetic appearance and function of the affected segment. From 2000 to 2012, 23 children with various syndromes in which radial club hand was identified were examined and treated at the FSI SRICO n.a. H. Turner. The main syndromes in which radial club hand was found are Holt-Oram syndrome, TAR syndrome and VACTERL syndrome. The tactics and techniques of surgical treatment of radial club hand in these syndromes most often do not differ from the treatment of other types of radial club hand, although they demand an individual approach depending on the severity and type of deformation of the upper extremity.

  2. Finger Vein Recognition Using Local Line Binary Pattern

    Directory of Open Access Journals (Sweden)

    Bakhtiar Affendi Rosdi

    2011-11-01

    Full Text Available In this paper, a personal verification method using finger veins is presented. The finger vein can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), which uses a square neighbourhood. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP).

  3. Finger Vein Recognition Using Local Line Binary Pattern

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using finger veins is presented. The finger vein can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), which uses a square neighbourhood. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP). PMID:22247670
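
    A minimal NumPy sketch of the LLBP idea shared by the two records above: pixels along a horizontal and a vertical line are thresholded against the centre pixel and the two binary codes are combined into one response. The line length and the bit-weighting convention are assumptions and may differ from the original descriptor.

```python
import numpy as np

def llbp(image, length=13):
    """Local Line Binary Pattern magnitude map.

    `length` is the (odd) line length; 13 is an assumed value, not taken from the papers.
    Border pixels (within length // 2 of the edge) are left at zero.
    """
    img = image.astype(np.int32)
    r = length // 2
    h, w = img.shape
    h_code = np.zeros_like(img)
    v_code = np.zeros_like(img)
    centre = img[r:h - r, r:w - r]
    bit = 0
    for k in range(-r, r + 1):
        if k == 0:
            continue  # the centre pixel is not compared with itself
        h_code[r:h - r, r:w - r] += (img[r:h - r, r + k:w - r + k] >= centre).astype(np.int32) << bit
        v_code[r:h - r, r:w - r] += (img[r + k:h - r + k, r:w - r] >= centre).astype(np.int32) << bit
        bit += 1
    # Combine the horizontal and vertical codes into a single magnitude response.
    return np.sqrt(h_code.astype(np.float64) ** 2 + v_code.astype(np.float64) ** 2)
```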

  4. Slow potentials in a melody recognition task.

    Science.gov (United States)

    Verleger, R; Schellberg, D

    1990-01-01

    In a previous study, slow negative shifts were found in the EEG of subjects listening to well-known melodies. The two experiments reported here were designed to investigate the variables to which these slow potentials are related. In the first experiment, two opposite hypotheses were tested: the slow shifts might express subjects' acquaintance with the melodies or, on the contrary, the effort invested in identifying them. To this end, some of the melodies were presented in the rhythms of other melodies to make recognition more difficult. Further, melodies rated as very well-known and as very unknown were analysed separately. However, the slow shifts were not affected by these experimental variations. Therefore, in the second experiment, on the one hand the purely physical parameters of intensity and duration were varied, but this variation had no impact on the slow shifts either. On the other hand, recognition was made more difficult by monotonously repeating the pitch of the 4th tone for the rest of some melodies. The slow negative shifts were enhanced with these monotonous melodies. This enhancement supports the "effort" hypothesis. Accordingly, the other shifts obtained in both experiments might likewise reflect effort. But since the task was not demanding, it is suggested that these constant shifts reflect the effort invested in coping with the entire underarousing situation rather than with the task. Frequently, slow eye movements occurred in the same time range as the slow potentials, resulting in EOG potentials spreading to the EEG recording sites. Yet the results did not change substantially when the EEG recordings were corrected for the influence of EOG potentials.

  5. Soft object deformation monitoring and learning for model-based robotic hand manipulation.

    Science.gov (United States)

    Cretu, Ana-Maria; Payeur, Pierre; Petriu, Emil M

    2012-06-01

    This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and maps them with force measurements with the goal of providing the necessary information to the controller of a robotic hand to ensure safe model-based deformable object manipulation. Measurements corresponding to the interaction force at the level of the fingertips and to the position of the fingertips of a three-finger robotic hand are associated with the contours of a deformed object tracked in a series of images using neural-network approaches. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption on the object's material. The availability of such models can contribute to the improvement of a robotic hand controller, therefore allowing more accurate and stable grasp while providing more elaborate manipulation capabilities for deformable objects. Experiments performed for different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is submitted to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background.

  6. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
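
    The histogram of shape index (HoS) descriptor mentioned above can be sketched as follows, assuming the principal curvatures k1 and k2 have already been estimated per vertex of the landmark neighbourhood; the [0, 1] shape-index mapping and the bin count are common conventions, not necessarily those used in the paper.

```python
import numpy as np

def shape_index(k1, k2, eps=1e-12):
    """Shape index from principal curvatures (k1 >= k2), mapped to [0, 1]."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

def histogram_of_shape_index(k1, k2, weights=None, bins=16):
    """HoS descriptor for one landmark neighbourhood: a (weighted) histogram of shape index.

    `weights` can carry per-vertex area or magnitude weights; `bins=16` is an assumed value.
    """
    si = shape_index(k1, k2)
    hist, _ = np.histogram(si, bins=bins, range=(0.0, 1.0), weights=weights, density=True)
    return hist
```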

  7. Stiff Hands

    Science.gov (United States)


  8. Hand Infections

    Science.gov (United States)


  9. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.

  10. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin; Ding, Huaxiong; Huang, Di; Wang, Yunhong; Zhao, Xi; Morvan, Jean-Marie; Chen, Liming

    2015-01-01

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.
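
    A compact sketch of the two fusion strategies named in these records (feature-level and score-level), assuming scikit-learn; the descriptor names, sizes and the equal score weights are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-face descriptors for N face scans (names and sizes are illustrative).
N = 200
rng = np.random.default_rng(0)
texture_2d = rng.normal(size=(N, 128))     # 2D texture descriptor (e.g. HSOG-like)
shape_3d = rng.normal(size=(N, 96))        # 3D shape descriptor (e.g. meshHOG-like)
labels = rng.integers(0, 6, size=N)        # six prototypical expressions

# Feature-level fusion: concatenate descriptors before training one classifier.
fused_features = np.hstack([texture_2d, shape_3d])
feat_clf = SVC(kernel="rbf", probability=True).fit(fused_features, labels)

# Score-level fusion: train one classifier per descriptor and average their scores.
clf_2d = SVC(kernel="rbf", probability=True).fit(texture_2d, labels)
clf_3d = SVC(kernel="rbf", probability=True).fit(shape_3d, labels)
scores = 0.5 * clf_2d.predict_proba(texture_2d) + 0.5 * clf_3d.predict_proba(shape_3d)
score_pred = scores.argmax(axis=1)
```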

  11. Javanese Character Feature Extraction Based on Shape Energy

    Directory of Open Access Journals (Sweden)

    Galih Hendra Wibowo

    2017-07-01

    Full Text Available Javanese characters are part of Indonesia's noble culture, especially in Java. However, the number of Javanese people who are able to read these characters has decreased, so conservation efforts are needed in the form of a system that is able to recognize them. One solution to this problem lies in Optical Character Recognition (OCR) studies, where one of the heaviest points lies in feature extraction, i.e., distinguishing each character. Shape Energy is a feature extraction method whose basic idea is that a character can be distinguished simply through its skeleton. Based on this idea, the feature extraction is developed further from its components to produce an angular histogram with various multiples of a base angle. The performance of this method and of its basic method is then tested on a Javanese character dataset of 240 samples with 19 labels, obtained from various images, using K-Nearest Neighbors as the classification method. An accuracy of 80.83% was obtained through cross-validation for the angular histogram with an angle of 20 degrees, 23% better than Shape Energy. In addition, other test results show that the method is able to recognize rotated characters, with the lowest performance of 86% at 180-degree rotation and the highest of 96.97% at 90-degree rotation. It can be concluded that this method improves the performance of Shape Energy for the recognition of Javanese characters and is robust to rotation.
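
    A hedged sketch of an angular-histogram feature with a K-Nearest Neighbors classifier, assuming NumPy and scikit-learn; deriving stroke directions from the image gradient of the skeleton is a simplification of the component-based construction described above, and the bin width is the 20-degree setting reported to work best.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def angular_histogram(skeleton, step_deg=20):
    """Normalized histogram of local direction angles of a binary character skeleton."""
    gy, gx = np.gradient(skeleton.astype(np.float64))
    mask = np.hypot(gx, gy) > 0
    angles = np.degrees(np.arctan2(gy[mask], gx[mask])) % 180.0
    bins = int(180 // step_deg)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 180.0))
    return hist / max(hist.sum(), 1)

# Hypothetical usage with a list of skeleton images and their labels:
# X = np.array([angular_histogram(s) for s in skeletons])
# knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
# predictions = knn.predict(X)
```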

  12. Flaw shape reconstruction – an experimental approach

    Directory of Open Access Journals (Sweden)

    Marilena STANCULESCU

    2009-05-01

    Full Text Available Flaws can be classified as acceptable and unacceptable. As a result of nondestructive testing, an Admit/Reject decision is taken regarding the tested product with respect to some acceptability criteria. In order to take the right decision, one should know the shape and the dimensions of the flaw. On the other hand, flaws considered to be acceptable develop over time, so that they can become unacceptable. In this case, knowledge of the shape and dimensions of the flaw allows the product's lifetime to be determined. For interior flaw shape reconstruction, the best procedure is the use of a difference static magnetic field. We have a stationary magnetic field problem, but we face the difficulty posed by the nonlinear media. This paper presents the results of the experimental work on control specimens with and without flaws.

  13. Estimation of stature from hand and foot dimensions in a Korean population.

    Science.gov (United States)

    Kim, Wonjoon; Kim, Yong Min; Yun, Myung Hwan

    2018-04-01

    The estimation of stature using foot and hand dimensions is essential in the process of personal identification. The shapes of feet and hands vary depending on race and gender, and it is of great importance to design an adequate equation that accounts for these variances when estimating stature. This study is based on a total of 5,195 South Korean males and females, aged from 20 to 59 years. Body dimensions of stature, hand length, hand breadth, foot length, and foot breadth were measured according to standard anthropometric procedures. An independent t-test was performed in order to verify significant gender-induced differences, and the results showed that there was a significant difference between males and females for all the foot and hand dimensions. The foot length showed the highest correlation with stature, whereas the hand breadth showed the lowest. A stepwise regression analysis was conducted, and the results showed that males had the highest prediction accuracy in the regression equation consisting of foot length and hand length (R² = 0.532), whereas females had the highest accuracy in the regression model consisting of foot length and hand breadth (R² = 0.437). The findings of this study indicate that hand and foot dimensions can be used to predict the stature of South Koreans in the forensic science field. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
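
    The reported regression models can be reproduced in outline with scikit-learn; the measurements below are made-up placeholders, so the fitted coefficients are illustrative only and the study's own equations should be used in practice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical measurements in centimetres (placeholder data, not from the survey).
foot_length = np.array([24.1, 25.3, 26.0, 23.5, 27.2])
hand_length = np.array([17.8, 18.5, 19.1, 17.2, 19.9])
stature     = np.array([165.0, 171.2, 175.4, 162.3, 180.1])

# The male model reported above used foot length + hand length as predictors.
X = np.column_stack([foot_length, hand_length])
model = LinearRegression().fit(X, stature)
predicted = model.predict([[25.8, 18.9]])
print(f"estimated stature: {predicted[0]:.1f} cm, "
      f"R^2 on training data: {model.score(X, stature):.3f}")
```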

  14. Comparison of eye imaging pattern recognition using neural network

    Science.gov (United States)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The beauty of an eye recognition system is that it can automatically identify and verify a person from digital images or a video source. The eye has various characteristics, such as the color of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for eye image recognition. All the eye images captured from the webcam in RGB format must go through several processing steps before they can be input to the pattern recognition process. The results show that the final values of the weights and biases, obtained after fully training the network on 6 eye images for one subject, are memorized by the neural network and serve as the reference weights and biases for the testing part. The targets are classified into 5 different types for 5 subjects. The eye images can then be matched to a subject based on the targets set earlier during the training process. When the values for a new eye image and an eye image in the database are almost equal, the eye image is considered a match.

  15. Products recognition on shop-racks from local scale-invariant features

    Science.gov (United States)

    Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek

    2016-04-01

    This paper presents a system designed for multi-object detection purposes and adjusted for the application of product search on market shelves. The system uses well-known binary keypoint detection algorithms for finding characteristic points in the image. One of the main ideas is object recognition based on the Implicit Shape Model method. The authors propose many improvements to the algorithm. Originally, fiducial points are matched with a very simple function. This limits the number of object parts that can be successfully separated, while various methods of classification may be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.

  16. Contact-free palm-vein recognition based on local invariant features.

    Directory of Open Access Journals (Sweden)

    Wenxiong Kang

    Full Text Available Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs, respectively, which demonstrate the effectiveness of the proposed approach.

  17. Contact-free palm-vein recognition based on local invariant features.

    Science.gov (United States)

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
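
    A minimal sketch of the hierarchical enhancement step described in these records (a difference-of-Gaussians band-pass filter to suppress uneven illumination, followed by histogram equalization to highlight vein texture), assuming OpenCV; the two Gaussian scales are assumed values, not those used in the paper.

```python
import cv2
import numpy as np

def enhance_palm_veins(gray, sigma_small=2.0, sigma_large=4.0):
    """DOG band-pass filtering followed by histogram equalization on a grayscale palm image."""
    g = gray.astype(np.float32)
    small = cv2.GaussianBlur(g, (0, 0), sigma_small)
    large = cv2.GaussianBlur(g, (0, 0), sigma_large)
    dog = small - large                                   # band-pass (DoG) response
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.equalizeHist(dog)                          # stretch contrast of vein texture

# Hypothetical usage:
# enhanced = enhance_palm_veins(cv2.imread("palm.png", cv2.IMREAD_GRAYSCALE))
```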

  18. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable and 3D gestural interaction might be the most effective alternative for the current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and database and retrieving the best match. Once the best match is found from the database in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  19. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    Science.gov (United States)

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

    Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low power and high performance microcontroller for on-board processing. Our system achieves the same accuracy of high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality. At the same time, due to its flexible configuration, it can be compared to the few wearable platforms designed for EMG gesture recognition available on market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which were used to evaluate the impact of the number of EMG channels, the number of recognized gestures and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented a SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, which is aligned with the state-of-the-art off-line results and a 29.7 mW power consumption, guaranteeing 44 hours of continuous operation with a 400 mAh battery.

  20. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    Directory of Open Access Journals (Sweden)

    Federica Bianca Rosselli

    2015-03-01

    Full Text Available In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  1. Emotion recognition from speech: tools and challenges

    Science.gov (United States)

    Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.

    2015-05-01

    Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and no agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigations into extracting meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduce the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We demonstrate that our scheme outperforms the state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the huge gap between the accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be dealt with as a classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets where the subjects attempt to suppress all but one emotion.

  2. Principal Curvature Measures Estimation and Application to 3D Face Recognition

    KAUST Repository

    Tang, Yinhang

    2017-04-06

    This paper presents an effective 3D face keypoint detection, description and matching framework based on three principal curvature measures. These measures give a unified definition of principal curvatures for both smooth and discrete surfaces. They can be reasonably computed based on normal cycle theory and geometric measure theory. The strong theoretical basis of these measures provides a solid discrete estimation method on real 3D face scans represented as triangle meshes. Based on these estimated measures, the proposed method can automatically detect a set of sparse and discriminating 3D facial feature points. The local facial shape around each 3D feature point is comprehensively described by histograms of these principal curvature measures. To guarantee the pose invariance of these descriptors, three principal curvature vectors of these principal curvature measures are employed to assign the canonical directions. Similarity comparison between faces is accomplished by matching all these curvature-based local shape descriptors using the sparse representation-based reconstruction method. The proposed method was evaluated on three public databases, i.e. FRGC v2.0, Bosphorus, and Gavab. Experimental results demonstrate that the three principal curvature measures contain strong complementarity for 3D facial shape description, and their fusion can largely improve the recognition performance. Our approach achieves rank-one recognition rates of 99.6, 95.7, and 97.9% on the neutral subset, expression subset, and the whole FRGC v2.0 database, respectively. This indicates that our method is robust to moderate facial expression variations. Moreover, it also achieves very competitive performance on the pose subset (over 98.6% except Yaw 90°) and the occlusion subset (98.4%) of the Bosphorus database. Even in the case of extreme pose variations like profiles, it also significantly outperforms the state-of-the-art approaches with a recognition rate of 57.1%.

  3. Optical Pattern Recognition

    Science.gov (United States)

    Yu, Francis T. S.; Jutamulia, Suganda

    2008-10-01

    Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalesky and Haldum M. Oxaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhou Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photrefractive devices in optical pattern recognition Ziangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.

  4. Interplay between affect and arousal in recognition memory.

    Science.gov (United States)

    Greene, Ciara M; Bahri, Pooja; Soto, David

    2010-07-23

    Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.

  5. Interplay between affect and arousal in recognition memory.

    Directory of Open Access Journals (Sweden)

    Ciara M Greene

    2010-07-01

    Full Text Available Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.

  6. Custom-made silicone hand prosthesis: A case study.

    Science.gov (United States)

    Nayak, S; Lenka, P K; Equebal, A; Biswas, A

    2016-09-01

    Up to now, a cosmetic glove was the most common method for managing transmetacarpal (TMC) and carpometacarpal (CMC) amputations, but it is devoid of markings and body color. At this amputation level, it is very difficult to fit a functional prosthesis because of the short available length, unsightly shape, grafted skin, contracture and lack of functional prosthetic options. A 30-year-old male came to our clinic with amputation at the 1st to 4th carpometacarpal level and a 5th metacarpal that was projected laterally and fused with the carpal bone. The stump had grafted skin, redness, and an unhealed suture line. He complained of pain projected over the metacarpal and suture area. The clinical team members decided to fabricate a custom-made silicone hand prosthesis to accommodate the stump, protect the grafted skin, improve the hand's appearance and provide some passive function. The custom silicone hand prosthesis was fabricated with modified flexible wires to provide passive interphalangeal movement. Basic training, care and maintenance instructions for the prosthesis were given to the patient. The silicone hand prosthesis was able to restore the appearance of the lost digits and provide some passive function. His pain (VAS score) was reduced. Improvement in activities of daily living was found in the DASH questionnaire and Jebsen-Taylor Hand Function test. A silicone glove is a good option for more distal amputations, as it can accommodate any deformity, protect the skin, enhance the appearance and provide functional assistance. This case study provides a simple method to get passively movable fingers after proximal hand amputation. Copyright © 2016. Published by Elsevier Masson SAS.

  7. Development of anthropomorphic robotic hand driven by Pneumatic Artificial Muscles for robotic applications

    Science.gov (United States)

    Farag, Mohannad; Zainul Azlan, Norsinnira; Hayyan Alsibai, Mohammed

    2018-04-01

    This paper presents the design and fabrication of a three-fingered anthropomorphic robotic hand. The fingers are driven by tendons and actuated by human muscle-like actuators known as Pneumatic Artificial Muscles (PAM). The proposed design allows the actuators to be mounted outside the hand, where each finger can be driven by one PAM actuator and six indirectly interlinked tendons. With this design, the three-fingered hand has a compact size and a light weight, with a mass of 150.25 grams, imitating the human hand in terms of size and weight. The hand also successfully grasped objects of different shapes and weights of up to 500 g. Even though the number of PAM actuators equals the number of Degrees of Freedom (DOF), the design guarantees the driving of three joints by only one actuator, reducing the number of required actuators from 3 to 1. Therefore, this hand is suitable for research on robotic applications in terms of design, cost and the ability to be equipped with several types of sensors.

  8. Recognition and enforcement of foreign judgments in the Law of Iran and England: a comparative study

    Directory of Open Access Journals (Sweden)

    Abasat Pour Mohammad

    2017-07-01

    Full Text Available The aim of this study was to compare the recognition and enforcement of foreign judgments in the law of Iran and England. There are many similarities and commonalities between the legal systems of Iran and England in the field of recognition and enforcement of foreign judgments, including public policy and conflicting judgments. Public policy in English law is more specific than in Iranian law. The requirement that the judgment be a civil matter, the impossibility of recognition, and the non-enforcement of tax and criminal judgments are among the similarities of the two systems. On the other hand, reciprocity, the competence of the foreign court, and the jurisdiction governing the nature of the claim are among the points on which the Iranian and English legal systems differ with regard to recognizing and enforcing foreign judgments.

  9. How does language model size affect speech recognition accuracy for the Turkish language?

    Directory of Open Access Journals (Sweden)

    Behnam ASEFİSARAY

    2016-05-01

    Full Text Available In this paper we aimed at investigating the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provide details of our approach for obtaining the LM for Turkish. Since the LM is obtained by statistical processing of raw text, we expect that increasing the size of the data available for training the LM will improve SR accuracy. Since this study is based on the recognition of Turkish, which is a highly agglutinative language, it is important to find out the appropriate size for the training data. The minimum required data size is expected to be much higher than the data needed to train a language model for a language with a low level of agglutination such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM, as these are expected to be different for a highly agglutinative language. We showed that increasing the training data size to an appropriate level improved the recognition accuracy, whereas changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.

  10. Finger vein recognition with personalized feature selection.

    Science.gov (United States)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Meng, Xianjing

    2013-08-22

    Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using the sparse weight vector, which is trained by using LASSO and called PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of the PHGTOG and PFS-PHGTOG, experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG can further boost the performance in comparison with PHGTOG.

  11. Finger Vein Recognition with Personalized Feature Selection

    Directory of Open Access Journals (Sweden)

    Xianjing Meng

    2013-08-01

    Full Text Available Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using the sparse weight vector, which is trained by using LASSO and called PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG can further boost the performance in comparison with PHGTOG.

  12. Toward retail product recognition on grocery shelves

    Science.gov (United States)

    Varol, Gül; Kuzu, Rıdvan S.

    2015-03-01

    This paper addresses the problem of retail product recognition on grocery shelf images. We present a technique for accomplishing this task with a low time complexity. We decompose the problem into detection and recognition. The former is achieved by a generic product detection module which is trained on a specific class of products (e.g. tobacco packages). The cascade object detection framework of Viola and Jones [1] is used for this purpose. We further make use of Support Vector Machines (SVMs) to recognize the brand inside each detected region. We extract both shape and color information, and apply feature-level fusion of two separate descriptors computed with the bag-of-words approach. Furthermore, we introduce a dataset (available on request) that we have collected for similar research purposes. Results are presented on this dataset of more than 5,000 images consisting of 10 tobacco brands. We show that satisfactory detection and classification can be achieved on devices with cheap computational power. Potential applications of the proposed approach include planogram compliance control, inventory management and assisting visually impaired people during shopping.
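
    A rough sketch of the detect-then-recognize pipeline described above: a cascade detector proposes product regions, then an SVM classifies the brand from fused shape/color descriptors. The cascade file name, histogram sizes and the bag-of-words encoder callback are placeholders, not the authors' configuration.

```python
# Illustrative detect-then-recognize pipeline; "product_cascade.xml", the BoW
# encoder callback and the fusion scheme are assumptions, not the paper's setup.
import cv2
import numpy as np
from sklearn.svm import SVC

detector = cv2.CascadeClassifier("product_cascade.xml")   # pre-trained cascade (assumed file)
brand_svm = SVC(kernel="linear")                           # assumed to be trained elsewhere

def color_histogram(patch, bins=16):
    hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def recognize_shelf(image_bgr, shape_bow_encoder):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in boxes:
        patch = image_bgr[y:y + h, x:x + w]
        shape_feat = shape_bow_encoder(patch)             # assumed BoW encoder callback
        color_feat = color_histogram(patch)
        fused = np.concatenate([shape_feat, color_feat])  # feature-level fusion
        results.append(((x, y, w, h), brand_svm.predict([fused])[0]))
    return results
```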

  13. New pattern recognition system in the e-nose for Chinese spirit identification

    International Nuclear Information System (INIS)

    Zeng Hui; Li Qiang; Gu Yu

    2016-01-01

    This paper presents a new pattern recognition system for Chinese spirit identification by using the polymer quartz piezoelectric crystal sensor based e-nose. The sensors are designed based on the quartz crystal microbalance (QCM) principle, and they can capture different vibration frequency signal values for Chinese spirit identification. For each sensor in an 8-channel sensor array, seven characteristic values of the original vibration frequency signal are first extracted, i.e., the average value (A), root-mean-square value (RMS), shape factor value (S_f), crest factor value (C_f), impulse factor value (I_f), clearance factor value (CL_f), and kurtosis factor value (K_v). Then the dimension of the characteristic values is reduced by the principal component analysis (PCA) method. Finally, the back propagation (BP) neural network algorithm is used to recognize Chinese spirits. The experimental results show that the recognition rate for six kinds of Chinese spirits is 93.33% and that our proposed new pattern recognition system can identify Chinese spirits effectively. (paper)
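
    A hedged sketch of the feature pipeline summarized above: the seven time-domain characteristic values per sensor channel, PCA for dimensionality reduction, and a back-propagation network (here sklearn's MLPClassifier). Component counts and layer sizes are illustrative, not the paper's settings.

```python
# Seven characteristic values per channel -> 56-D vector (8 channels) -> PCA -> BP network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def characteristic_values(x):
    """Seven standard time-domain features of one channel's frequency-shift signal."""
    x = np.asarray(x, dtype=float)
    a = np.mean(x)                                   # average value A
    rms = np.sqrt(np.mean(x ** 2))                   # root-mean-square value
    mean_abs = np.mean(np.abs(x))
    peak = np.max(np.abs(x))
    s_f = rms / mean_abs                             # shape factor
    c_f = peak / rms                                 # crest factor
    i_f = peak / mean_abs                            # impulse factor
    cl_f = peak / np.mean(np.sqrt(np.abs(x))) ** 2   # clearance factor
    k_v = np.mean((x - a) ** 4) / np.var(x) ** 2     # kurtosis factor
    return np.array([a, rms, s_f, c_f, i_f, cl_f, k_v])

def sample_features(channels):
    """Concatenate the 7 values over an 8-channel sensor array -> 56-D vector."""
    return np.concatenate([characteristic_values(c) for c in channels])

# PCA followed by a back-propagation network classifier (illustrative sizes).
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
```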

  14. Waving real hand gestures recorded by wearable motion sensors to a virtual car and driver in a mixed-reality parking game

    NARCIS (Netherlands)

    Bannach, D.; Amft, O.D.; Kunze, K.S.; Heinz, E.A.; Tröster, G.; Lukowicz, P.

    2007-01-01

    We envision to add context awareness and ambient intelligence to edutainment and computer gaming applications in general. This requires mixed-reality setups and ever-higher levels of immersive human-computer interaction. Here, we focus on the automatic recognition of natural human hand gestures

  15. A motion-planning method for dexterous hand operating a tool based on bionic analysis

    Directory of Open Access Journals (Sweden)

    Wei Bo

    2017-01-01

    Full Text Available In order to meet the need for robots to operate tools of different types and sizes, the dexterous hand is studied by many scientific research institutions. However, the large number of joints in a dexterous hand makes motion planning difficult. Aiming at this problem, this paper proposes a planning method based on a back-propagation neural network (BPNN) inspired by human hands. Firstly, this paper analyses the structure and function of the human hand and summarizes its typical operation strategies. Secondly, based on the manual operation strategy, tools are classified according to their shape and the corresponding operation mode of the dexterous hand is presented. Thirdly, the BPNN is trained on the humanoid operation and then outputs the operation plan. Finally, simulation experiments of grasping simple tools and operating complex tools are carried out in MATLAB and ADAMS. The simulation verifies the effectiveness of this method.
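
    The planning idea above (a back-propagation network trained on human-demonstrated operations) can be sketched roughly as follows; the tool-descriptor encoding, joint count and network size are assumptions for illustration only, not the paper's implementation.

```python
# Illustrative BPNN planner: map a tool descriptor to a demonstrated joint configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_JOINTS = 20   # assumed number of dexterous-hand joint angles

def train_planner(X, Y):
    """X: (n_samples, n_tool_features) tool descriptors (shape class, size, ...).
    Y: (n_samples, N_JOINTS) joint angles demonstrated by a human operator."""
    planner = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
    planner.fit(X, Y)
    return planner

def plan_grasp(planner, tool_descriptor):
    """Output a joint-angle vector for the given tool descriptor."""
    return planner.predict(np.asarray(tool_descriptor).reshape(1, -1))[0]
```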

  16. Eating tools in hand activate the brain systems for eating action: a transcranial magnetic stimulation study.

    Science.gov (United States)

    Yamaguchi, Kaori; Nakamura, Kimihiro; Oga, Tatsuhide; Nakajima, Yasoichi

    2014-07-01

    There is increasing neuroimaging evidence suggesting that visually presented tools automatically activate the human sensorimotor system coding learned motor actions relevant to the visual stimuli. Such crossmodal activation may reflect a general functional property of the human motor memory and thus can be operating in other, non-limb effector organs, such as the orofacial system involved in eating. In the present study, we predicted that somatosensory signals produced by eating tools in hand covertly activate the neuromuscular systems involved in eating action. In Experiments 1 and 2, we measured motor evoked response (MEP) of the masseter muscle in normal humans to examine the possible impact of tools in hand (chopsticks and scissors) on the neuromuscular systems during the observation of food stimuli. We found that eating tools (chopsticks) enhanced the masseter MEPs more greatly than other tools (scissors) during the visual recognition of food, although this covert change in motor excitability was not detectable at the behavioral level. In Experiment 3, we further observed that chopsticks overall increased MEPs more greatly than scissors and this tool-driven increase of MEPs was greater when participants viewed food stimuli than when they viewed non-food stimuli. A joint analysis of the three experiments confirmed a significant impact of eating tools on the masseter MEPs during food recognition. Taken together, these results suggest that eating tools in hand exert a category-specific impact on the neuromuscular system for eating. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. FMRI evidence of 'mirror' responses to geometric shapes.

    Directory of Open Access Journals (Sweden)

    Clare Press

    Full Text Available Mirror neurons may be a genetic adaptation for social interaction. Alternatively, the associative hypothesis proposes that the development of mirror neurons is driven by sensorimotor learning, and that, given suitable experience, mirror neurons will respond to any stimulus. This hypothesis was tested using fMRI adaptation to index populations of cells with mirror properties. After sensorimotor training, where geometric shapes were paired with hand actions, BOLD response was measured while human participants experienced runs of events in which shape observation alternated with action execution or observation. Adaptation from shapes to action execution, and critically, observation, occurred in ventral premotor cortex (PMv) and inferior parietal lobule (IPL). Adaptation from shapes to execution indicates that neuronal populations responding to the shapes had motor properties, while adaptation to observation demonstrates that these populations had mirror properties. These results indicate that sensorimotor training induced populations of cells with mirror properties in PMv and IPL to respond to the observation of arbitrary shapes. They suggest that the mirror system has not been shaped by evolution to respond in a mirror fashion to biological actions; instead, its development is mediated by stimulus-general processes of learning within a system adapted for visuomotor control.

  18. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview.

    Science.gov (United States)

    Atzori, Manfredo; Müller, Henning

    2015-01-01

    Hand amputation can dramatically affect the capabilities of a person. Cortical reorganization occurs in the brain, but the motor and somatosensorial cortex can interact with the remnant muscles of the missing hand even many years after the amputation, leading to the possibility to restore the capabilities of hand amputees through myoelectric prostheses. Myoelectric hand prostheses with many degrees of freedom are commercially available and recent advances in rehabilitation robotics suggest that their natural control can be performed in real life. The first commercial products exploiting pattern recognition to recognize the movements have recently been released, however the most common control systems are still usually unnatural and must be learned through long training. Dexterous and naturally controlled robotic prostheses can become reality in the everyday life of amputees but the path still requires many steps. This mini-review aims to improve the situation by giving an overview of the advancements in the commercial and scientific domains in order to outline the current and future chances in this field and to foster the integration between market and scientific research.

  19. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview

    Directory of Open Access Journals (Sweden)

    Manfredo eAtzori

    2015-11-01

    Full Text Available Hand amputation can dramatically affect the capabilities of a person. Cortical reorganization occurs in the brain, but the motor and somatosensorial cortex can interact with the remnant muscles of the missing hand even many years after the amputation, leading to the possibility to restore the capabilities of hand amputees through myoelectric prostheses. Myoelectric hand prostheses with many degrees of freedom are commercially available and recent advances in rehabilitation robotics suggest that their natural control can be performed in real life. The first commercial products exploiting pattern recognition to recognize the movements have recently been released, however the most common control systems are still usually unnatural and must be learned through long training. Dexterous and naturally controlled robotic prostheses can become reality in the everyday life of amputees but the path still requires many steps. This mini-review aims to improve the situation by giving an overview of the advancements in the commercial and scientific domains in order to outline the current and future chances in this field and to foster the integration between market and scientific research.

  20. EthoHand: A dexterous robotic hand with ball-joint thumb enables complex in-hand object manipulation

    OpenAIRE

    Konnaris, C; Gavriel, C; Thomik, AAC; Aldo Faisal, A

    2016-01-01

    Our dexterous hand is a fundamental human feature that distinguishes us from other animals by enabling us to go beyond grasping to support sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom while being under-actuated by seven motors, with the ability to replicate human hand movements in a naturalistic manner, including in-hand object manipulation. Therefore, we focused on the development of a no...

  1. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research

    Directory of Open Access Journals (Sweden)

    Laslo Dinges

    2016-03-01

    Full Text Available Document analysis tasks such as pattern recognition, word spotting or segmentation, require comprehensive databases for training and validation. Not only variations in writing style but also the used list of words is of importance in the case that training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of an Active Shape Model based character classifiers—that we proposed earlier—improves the word recognition accuracy. Further improvements are achieved, by using a vocabulary of the 50,000 most common Arabic words for error correction.
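
    The final error-correction step mentioned above (matching a recognized word against a list of the 50,000 most common Arabic words) can be illustrated with a simple edit-distance lookup; the vocabulary source and the distance threshold below are assumptions, not the authors' implementation.

```python
# Vocabulary-based error correction: replace a recognized word by the closest
# frequent word if it is within a small edit distance. Threshold is illustrative.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct_word(hypothesis, vocabulary, max_distance=2):
    """Return the closest vocabulary word, or the hypothesis if none is close enough."""
    best = min(vocabulary, key=lambda w: edit_distance(hypothesis, w))
    return best if edit_distance(hypothesis, best) <= max_distance else hypothesis
```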

  2. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.

    Science.gov (United States)

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif

    2016-03-11

    Document analysis tasks such as pattern recognition, word spotting or segmentation, require comprehensive databases for training and validation. Not only variations in writing style but also the used list of words is of importance in the case that training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of an Active Shape Model based character classifiers-that we proposed earlier-improves the word recognition accuracy. Further improvements are achieved, by using a vocabulary of the 50,000 most common Arabic words for error correction.

  3. Hand-related physical function in rheumatic hand conditions

    DEFF Research Database (Denmark)

    Klokker, Louise; Terwee, Caroline B; Wæhrens, Eva Ejlersen

    2016-01-01

    INTRODUCTION: There is no consensus about what constitutes the most appropriate patient-reported outcome measurement (PROM) instrument for measuring physical function in patients with rheumatic hand conditions. Existing instruments lack psychometric testing and vary in feasibility... and their psychometric qualities. We aim to develop a PROM instrument to assess hand-related physical function in rheumatic hand conditions. METHODS AND ANALYSIS: We will perform a systematic search to identify existing PROMs to rheumatic hand conditions, and select items relevant for hand-related physical function... as well as those items from the Patient Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank that are relevant to patients with rheumatic hand conditions. Selection will be based on consensus among reviewers. Content validity of selected items will be established...

  4. Hand-related physical function in rheumatic hand conditions

    DEFF Research Database (Denmark)

    Klokker, Louise; Terwee, Caroline; Wæhrens, Eva Elisabet Ejlersen

    2016-01-01

    INTRODUCTION: There is no consensus about what constitutes the most appropriate patient-reported outcome measurement (PROM) instrument for measuring physical function in patients with rheumatic hand conditions. Existing instruments lack psychometric testing and vary in feasibility...... and their psychometric qualities. We aim to develop a PROM instrument to assess hand-related physical function in rheumatic hand conditions. METHODS AND ANALYSIS: We will perform a systematic search to identify existing PROMs to rheumatic hand conditions, and select items relevant for hand-related physical function...... as well as those items from the Patient Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank that are relevant to patients with rheumatic hand conditions. Selection will be based on consensus among reviewers. Content validity of selected items will be established...

  5. Computerized literature reference system: use of an optical scanner and optical character recognition software.

    Science.gov (United States)

    Lossef, S V; Schwartz, L H

    1990-09-01

    A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or data-base files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.
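
    A modern, minimal analogue of the workflow described above (scan, OCR, store as searchable text): pytesseract stands in for the commercial OCR package of the original system, and the directory layout and file names are assumptions.

```python
# Minimal scan -> OCR -> keyword-search sketch; not the original 1990 system.
from pathlib import Path
from PIL import Image
import pytesseract

def add_reference(image_path, database_dir="references"):
    """OCR a scanned article page and store the text for later keyword search."""
    text = pytesseract.image_to_string(Image.open(image_path))
    out = Path(database_dir) / (Path(image_path).stem + ".txt")
    out.parent.mkdir(exist_ok=True)
    out.write_text(text, encoding="utf-8")

def search(keyword, database_dir="references"):
    """Return the stored documents whose OCR text contains the keyword."""
    return [p.name for p in Path(database_dir).glob("*.txt")
            if keyword.lower() in p.read_text(encoding="utf-8").lower()]
```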

  6. The parietal cortices participate in encoding, short-term memory, and decision-making related to tactile shape.

    Science.gov (United States)

    Rojas-Hortelano, Eduardo; Concha, Luis; de Lafuente, Victor

    2014-10-15

    We routinely identify objects with our hands, and the physical attributes of touched objects are often held in short-term memory to aid future decisions. However, the brain structures that selectively process tactile information to encode object shape are not fully identified. In this article we describe the areas within the human cerebral cortex that specialize in encoding, short-term memory, and decision-making related to the shape of objects explored with the hand. We performed event-related functional magnetic resonance imaging in subjects performing a shape discrimination task in which two sequentially presented objects had to be explored to determine whether they had the same shape or not. To control for low-level and nonspecific brain activations, subjects performed a temperature discrimination task in which they compared the temperature of two spheres. Our results show that although a large network of brain structures is engaged in somatosensory processing, it is the areas lining the intraparietal sulcus that selectively participate in encoding, maintaining, and deciding on tactile information related to the shape of objects. Copyright © 2014 the American Physiological Society.

  7. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    Science.gov (United States)

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    The quantitative measurements of hand bones, including volume, surface, orientation, and position are essential in investigating hand kinematics. Moreover, within the measurement stage, bone segmentation is the most important step due to its certain influences on measuring accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries. Thus, greater detail is required for improving segmentation accuracy. The authors then propose using a novel registration-based method on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of the model construction and registration-based segmentation stages. Given a reference postural image, the first stage requires construction of a drivable reference model characterized by hand bone shapes, intensity patterns, and articulated joint mechanism. By applying the reference model to the second stage, the authors initially design a model-based registration pursuant to intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model to target bone regions of the given postural image. The authors then refine the resulting surface to improve the superimposition between the registered reference model and target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surface image had an average margin of error within 1 mm (mm) only. In addition, the proposed method showed good agreement on the overlap of bone segmentations by dice similarity coefficient and also demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and
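
    A loose sketch of the intensity-based registration idea using off-the-shelf tooling (SimpleITK). It omits the articulated-model constraints and surface refinement that are central to the authors' method, and all file names and optimizer settings are placeholders.

```python
# Intensity-driven registration of a reference-posture bone model to a new postural
# image; placeholders throughout, not the authors' implementation.
import SimpleITK as sitk

fixed = sitk.ReadImage("target_posture.nii.gz", sitk.sitkFloat32)    # new postural MR image
moving = sitk.ReadImage("reference_model.nii.gz", sitk.sitkFloat32)  # reference-posture model

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)     # intensity-similarity metric
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# Resample the reference bone model into the target posture as an initial segmentation.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```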

  8. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    Full Text Available This paper presents a method of speech recognition using pattern recognition techniques. Learning consists of determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that differ from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
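
    The listed steps map onto a short, runnable sketch for a single audio frame; the frame length and number of coefficients below are illustrative choices, not the paper's settings.

```python
# Windowing -> FFT -> magnitude spectrum -> cepstral coefficients for one frame.
import numpy as np
from scipy.fft import dct

def cepstral_coefficients(frame, n_coeffs=13):
    """Compute simple cepstral coefficients for a 1-D audio frame."""
    windowed = frame * np.hamming(len(frame))           # Hamming window
    spectrum = np.abs(np.fft.rfft(windowed))            # magnitude spectrum
    log_spectrum = np.log(spectrum + 1e-10)             # avoid log(0)
    cepstrum = dct(log_spectrum, type=2, norm="ortho")  # back to the "quefrency" domain
    return cepstrum[:n_coeffs]

# Example: a 25 ms frame at 16 kHz sampling (400 samples).
frame = np.random.randn(400)
print(cepstral_coefficients(frame))
```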

  9. Congruent bodily arousal promotes the constructive recognition of emotional words.

    Science.gov (United States)

    Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas

    2017-08-01

    Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants realized two blocks of a constructive recognition task, once after a cycling session (increased arousal), and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation than after the cycling, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing, and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Recognition and Toleration

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2010-01-01

    Recognition and toleration are ways of relating to the diversity characteristic of multicultural societies. The article concerns the possible meanings of toleration and recognition, and the conflict that is often claimed to exist between these two approaches to diversity. Different forms...... or interpretations of recognition and toleration are considered, confusing and problematic uses of the terms are noted, and the compatibility of toleration and recognition is discussed. The article argues that there is a range of legitimate and importantly different conceptions of both toleration and recognition...

  11. Molecular Recognition in the Colloidal World.

    Science.gov (United States)

    Elacqua, Elizabeth; Zheng, Xiaolong; Shillingford, Cicely; Liu, Mingzhu; Weck, Marcus

    2017-11-21

    Colloidal self-assembly is a bottom-up technique to fabricate functional nanomaterials, with paramount interest stemming from programmable assembly of smaller building blocks into dynamic crystalline domains and photonic materials. Multiple established colloidal platforms feature diverse shapes and bonding interactions, while achieving specific orientations along with short- and long-range order. A major impediment to their universal use as building blocks for predesigned architectures is the inability to precisely dictate and control particle functionalization and concomitant reversible self-assembly. Progress in colloidal self-assembly necessitates the development of strategies that endow bonding specificity and directionality within assemblies. Methodologies that emulate molecular and polymeric three-dimensional (3D) architectures feature elements of covalent bonding, while high-fidelity molecular recognition events have been installed to realize responsive reconfigurable assemblies. The emergence of anisotropic 'colloidal molecules', coupled with the ability to site-specifically decorate particle surfaces with supramolecular recognition motifs, has facilitated the formation of superstructures via directional interactions and shape recognition. In this Account, we describe supramolecular assembly routes to drive colloidal particles into precisely assembled architectures or crystalline lattices via directional noncovalent molecular interactions. The design principles are based upon the fabrication of colloidal particles bearing surface-exposed functional groups that can undergo programmable conjugation to install recognition motifs with high fidelity. Modular and versatile by design, our strategy allows for the introduction and integration of molecular recognition principles into the colloidal world. We define noncovalent molecular interactions as site-specific forces that are predictable (i.e., feature selective and controllable complementary bonding partners

  12. Tunable shape memory behaviors of poly(ethylene vinyl acetate) achieved by adding poly(L-lactide)

    International Nuclear Information System (INIS)

    Zhang, Zhi-xing; Liao, Fei; He, Zhen-zhen; Yang, Jing-hui; Huang, Ting; Zhang, Nan; Wang, Yong; Gao, Xiao-ling

    2015-01-01

    In this work, different contents of poly(L-lactide) (PLLA) (20–50 wt%) were introduced into poly(ethylene vinyl acetate) (EVA) to prepare the samples with a tunable shape memory behavior. Morphological characterization demonstrated that with increasing PLLA content from 20 to 50 wt%, the blend morphology changed from sea-island structure to cocontinuous structure. In all the samples, PLLA was amorphous and it did not affect the crystallization of polyethylene part in the EVA component. The presence of PLLA greatly enhanced the storage modulus of samples, especially at relatively low temperatures. The shape memory behaviors of samples were systematically investigated and the results demonstrated that the EVA/PLLA blends exhibited a tunable shape memory effect. On one hand, PLLA accelerated the shape fixation and enhanced the fixity ratio of samples. On the other hand, PLLA reduced the dependence of shape fixity of samples on fixity temperatures. Specifically, for the first time, a critical recovery temperature was observed for the immiscible shape memory polymer blends. In this work, the critical recovery temperature was about 53 °C. At recovery temperature below the critical value, the blends exhibited smaller recovery ratios compared with the pure EVA, however, at recovery temperature above 53 °C, the blends exhibited higher recovery ratios. (paper)
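
    For reference, the shape fixity and recovery ratios discussed above are conventionally defined as follows in the shape-memory polymer literature (the paper may use an equivalent but differently written form):

```latex
\[
  R_f = \frac{\varepsilon_u}{\varepsilon_m} \times 100\%,
  \qquad
  R_r = \frac{\varepsilon_m - \varepsilon_p}{\varepsilon_m} \times 100\%
\]
```

    Here $\varepsilon_m$ is the maximum programming strain, $\varepsilon_u$ the strain retained after unloading (fixation), and $\varepsilon_p$ the residual strain after recovery.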

  13. Cross-modal working memory binding and word recognition skills: how specific is the link?

    Science.gov (United States)

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  14. Modeling of hand function by mapping the motion of individual muscle voxels with MR imaging velocity tagging

    International Nuclear Information System (INIS)

    Drace, J.; Pele, N.; Herfkens, R.J.

    1990-01-01

    This paper reports on a method to correlate the three-dimensional (3D) motion of the fingers with the complex motion of the intrinsic, flexor, and extensor muscles. A better understanding of hand function is important to the medical, surgical, and rehabilitation treatment of patients with arthritic, neurogenic, and mechanical hand dysfunctions. Static, high-resolution MR volumetric imaging defines the 3D shape of each individual bone in the hands of three subjects and three patients. Single-section velocity-tagging sequences (VIGOR) are performed through the hand and forearm, while the actual 3D motion of the hand is computed from the MR model and readings of fiber-optic goniometers attached to each finger. The accuracy of the velocity tagging is also tested with a motion phantom

  15. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    Science.gov (United States)

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed an incidental learning of the shape. Analyses showed subjects memorized their ocular movements rather than the polygon. In Exp. 2, exploration of a reversible figure such as a Necker cube varied in opposite directions. Then, both perspective possibilities are presented. The perspective the subjects recognized depended on the way they explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both motor intention of ocular movements and generation of a visual image are suggested.

  16. Detection, recognition, identification, and tracking of military vehicles using biomimetic intelligence

    Science.gov (United States)

    Pace, Paul W.; Sutherland, John

    2001-10-01

    This project is aimed at analyzing EO/IR images to provide automatic target detection/recognition/identification (ATR/D/I) of militarily relevant land targets. An increase in performance was accomplished using a biomimetic intelligence system functioning on low-cost, commercially available processing chips. Biomimetic intelligence has demonstrated advanced capabilities in the areas of hand-printed character recognition, real-time detection/identification of multiple faces in full 3D perspectives in cluttered environments, advanced capabilities in classification of ground-based military vehicles from SAR, and real-time ATR/D/I of ground-based military vehicles from EO/IR/HRR data in cluttered environments. The investigation applied these tools to real data sets and examined the parameters such as the minimum resolution for target recognition, the effect of target size, rotation, line-of-sight changes, contrast, partial obscuring, background clutter etc. The results demonstrated a real-time ATR/D/I capability against a subset of militarily relevant land targets operating in a realistic scenario. Typical results on the initial EO/IR data indicate probabilities of correct classification of resolved targets to be greater than 95 percent.

  17. Shape Analysis of Planar Multiply-Connected Objects Using Conformal Welding.

    Science.gov (United States)

    Lok Ming Lui; Wei Zeng; Shing-Tung Yau; Xianfeng Gu

    2014-07-01

    Shape analysis is a central problem in the field of computer vision. In 2D shape analysis, classification and recognition of objects from their observed silhouettes are crucial but difficult. It usually involves an efficient representation of 2D shape space with a metric, so that its mathematical structure can be used for further analysis. Although the study of 2D simply-connected shapes has been the subject of a large body of literature, the analysis of multiply-connected shapes is comparatively less studied. In this work, we propose a representation for general 2D multiply-connected domains with arbitrary topologies using conformal welding. A metric can be defined on the proposed representation space, which gives a metric to measure dissimilarities between objects. The main idea is to map the exterior and interior of the domain conformally to unit disks and circle domains (unit disk with several inner disks removed), using holomorphic 1-forms. A set of diffeomorphisms of the unit circle S^1 can be obtained, which together with the conformal modules are used to define the shape signature. A shape distance between shape signatures can be defined to measure dissimilarities between shapes. We prove theoretically that the proposed shape signature uniquely determines the multiply-connected objects under suitable normalization. We also introduce a reconstruction algorithm to obtain shapes from their signatures. This completes our framework and allows us to move back and forth between shapes and signatures. With that, a morphing algorithm between shapes can be developed through the interpolation of the Beltrami coefficients associated with the signatures. Experiments have been carried out on shapes extracted from real images. Results demonstrate the efficacy of our proposed algorithm as a stable shape representation scheme.
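
    As background for the construction above, the simply-connected version of conformal welding can be written compactly (the paper generalizes this to circle domains for multiply-connected shapes); this is a standard textbook formulation, not a restatement of the authors' exact signature:

```latex
\[
  \Phi_{-}:\mathbb{D}\to\Omega_{-}, \qquad \Phi_{+}:\mathbb{D}^{*}\to\Omega_{+},
  \qquad
  f \;=\; \Phi_{+}^{-1}\circ\Phi_{-}\big|_{S^{1}} : S^{1}\to S^{1}
\]
```

    Here a Jordan curve splits the sphere into interior $\Omega_{-}$ and exterior $\Omega_{+}$, $\mathbb{D}$ is the unit disk and $\mathbb{D}^{*}$ its exterior; the welding homeomorphism $f$ determines the curve up to Möbius transformations, which is the kind of normalization the shape signature relies on.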

  18. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.
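
    A hedged sketch of a very simple 1-D convolutional network for windowed sEMG classification, in the spirit of the simple architecture discussed above; the channel count, window length and number of movement classes are illustrative assumptions, not the paper's configuration.

```python
# Toy 1-D CNN for windowed sEMG classification; sizes are illustrative only.
import torch
import torch.nn as nn

class SEMGConvNet(nn.Module):
    def __init__(self, n_channels=10, n_classes=50, window=150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x):            # x: (batch, n_channels, window)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example forward pass on a batch of 8 windows.
model = SEMGConvNet()
logits = model(torch.randn(8, 10, 150))
print(logits.shape)   # torch.Size([8, 50])
```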

  19. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140

  20. A Continuum Mechanical Approach to Geodesics in Shape Space

    Science.gov (United States)

    2010-01-01
