WorldWideScience

Sample records for recognition system based

  1. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system, composed of a stereo vision system and an invisible near-infrared line laser, that can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of a point cloud of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation.

  2. Research on Face Recognition Based on Embedded System

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Because face recognition requires a large amount of image feature data to be stored and complex calculations to be executed, the face recognition process has traditionally been realized only on high-performance PCs. In this paper, OpenCV Haar-like facial features were used to identify the face region; Principal Component Analysis (PCA) was employed for quick extraction of face features and the Euclidean distance was adopted for face recognition; thus, the data amount and computational complexity are reduced effectively, and face recognition can be carried out on an embedded platform. Finally, based on the Tiny6410 embedded platform, a set of embedded face recognition systems was constructed. The test results showed that the system operates stably with a high recognition rate and can be used in portable and mobile identification and authentication.
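As an illustration of the pipeline this record describes (Haar-based detection followed by PCA features and Euclidean-distance matching), the sketch below shows only the PCA/nearest-neighbour part. It assumes face regions have already been detected (e.g., with an OpenCV Haar cascade), cropped and resized to a fixed size; all function and variable names are hypothetical.

```python
import numpy as np

def train_pca(faces, num_components=20):
    """faces: (n_samples, h*w) array of flattened grayscale face crops."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Principal components via SVD of the centered data (rows of vt are components).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:num_components]

def project(face, mean, components):
    return components @ (face - mean)

def recognize(probe, gallery_feats, gallery_labels, mean, components):
    """Return the label of the gallery face closest in Euclidean distance."""
    q = project(probe, mean, components)
    dists = np.linalg.norm(gallery_feats - q, axis=1)
    return gallery_labels[int(np.argmin(dists))]

# Usage sketch:
# mean, comps = train_pca(train_faces)
# feats = np.array([project(f, mean, comps) for f in train_faces])
# label = recognize(test_face, feats, train_labels, mean, comps)
```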

  3. Enhancement of Iris Recognition System Based on Phase Only Correlation

    Directory of Open Access Journals (Sweden)

    Nuriza Pramita

    2011-08-01

    Full Text Available The iris recognition system is one of the biometric-based recognition/identification systems. Numerous techniques have been implemented to achieve a good recognition rate, including ones based on Phase Only Correlation (POC). Significant and higher correlation peaks suggest that the system recognizes iris images of the same subject (person), while lower and insignificant peaks correspond to images of different subjects. Current POC methods have not investigated the minimum number of iris points that can be used to achieve higher correlation peaks. This paper proposes a method that uses only one-fourth of the full normalized iris size to achieve a higher (or at least the same) recognition rate. Simulation on the CASIA version 1.0 iris image database showed that the averaged recognition rate of the proposed method reached 67%, higher than that obtained using one-half (56%) or the full (53%) iris points. Furthermore, all (100%) POC peak values of the proposed method were higher than those of the method using the full iris points.
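A minimal sketch of the Phase Only Correlation operation referred to above: the cross-power spectrum of two (already normalized) iris images is reduced to its phase and inverse-transformed, so a sharp peak indicates a genuine comparison. Iris segmentation, normalization and the reduced-region selection studied in the paper are not reproduced here.

```python
import numpy as np

def phase_only_correlation(img1, img2, eps=1e-8):
    """Return the POC surface of two equally sized images; a sharp peak suggests a match."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= (np.abs(cross) + eps)       # keep only the phase of the cross-power spectrum
    poc = np.fft.ifft2(cross).real
    return np.fft.fftshift(poc)

# peak = phase_only_correlation(iris_a, iris_b).max()
# A peak close to 1 suggests the same subject; low, flat surfaces suggest different subjects.
```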

  4. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer-vision-based and neural-network-based algorithms, and generates a better detection result by combining them to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.

  5. An Evaluation of PC-Based Optical Character Recognition Systems.

    Science.gov (United States)

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for use by blind and visually impaired people. Considered are OCR components and terms, documentation, scanning and reading, command structure, conversion, unique features, accuracy of recognition, scanning time, speed, and cost. (DB)

  6. Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems.

    Science.gov (United States)

    Fang, Fuming; Shinozaki, Takahiro

    2018-01-01

    Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 character/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data.
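The sketch below illustrates the HMM modelling idea in a simplified, isolated-character form: one Gaussian HMM per character is trained on EOG feature sequences (here with the hmmlearn library as a stand-in) and a test sequence is assigned to the character whose model scores it highest. The paper's continuous decoding and DNN-HMM adaptation are not reproduced, and all parameter values are assumptions.

```python
import numpy as np
from hmmlearn import hmm

def train_char_models(train_data, n_states=5):
    """train_data: dict mapping character -> list of (T, d) EOG feature arrays."""
    models = {}
    for char, seqs in train_data.items():
        X = np.vstack(seqs)                       # concatenate sequences for hmmlearn
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[char] = m
    return models

def recognize_character(models, seq):
    """Return the character whose HMM assigns the highest log-likelihood to seq."""
    return max(models, key=lambda c: models[c].score(seq))
```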

  7. Optical-electronic shape recognition system based on synergetic associative memory

    Science.gov (United States)

    Gao, Jun; Bao, Jie; Chen, Dingguo; Yang, Youqing; Yang, Xuedong

    2001-04-01

    This paper presents a novel optical-electronic shape recognition system based on synergetic associative memory. Our shape recognition system is composed of two parts: the first is a feature extraction system; the second is a synergetic pattern recognition system. The Hough transform is proposed for feature extraction from the unrecognized object, reducing dimensionality and filtering out object distortion and noise; a synergetic neural network is proposed for realizing the associative memory in order to eliminate spurious states. We then adopt an optical-electronic realization of the system that satisfies the demands of real-time operation, high speed and parallelism. In order to realize a fast algorithm, we replace the dynamic evolution circuit with a judgment circuit according to the relationship between attention parameters and order parameters, and then implement the recognition of some simple images, proving the validity of the approach.

  8. Method for secure electronic voting system: face recognition based approach

    Science.gov (United States)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

    In this paper, we propose a framework for a low-cost secure electronic voting system based on face recognition. Essentially, Local Binary Patterns (LBP) are used for face feature characterization in texture format, followed by the chi-square distance for image classification. Two parallel systems are developed, based on smartphone and web applications, for the face learning and verification modules. The proposed system has two-tier security, using a person ID followed by face verification. A class-specific threshold is used to control the security level of face verification. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, our proposed system provides a secure, hassle-free voting system that is less intrusive compared with other biometrics.
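A rough sketch of the LBP-plus-chi-square matching step mentioned above, using scikit-image's LBP operator as a stand-in; the histogram granularity, LBP radius and acceptance threshold are assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, points=8, radius=1, bins=59):
    """Normalized histogram of non-rotation-invariant uniform LBP codes of a grayscale face."""
    lbp = local_binary_pattern(gray_face, points, radius, method="nri_uniform")
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins))
    return hist.astype(float) / (hist.sum() + 1e-8)

def chi_square(h1, h2, eps=1e-8):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def verify(probe_face, enrolled_hist, threshold=0.5):
    """Accept the voter if the chi-square distance falls below a (class-specific) threshold."""
    return chi_square(lbp_histogram(probe_face), enrolled_hist) < threshold
```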

  9. Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition

    Science.gov (United States)

    Huntsberger, Terry

    2011-01-01

    The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems), modeled after the human immune system, to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.

  10. Contextual System of Symbol Structural Recognition based on an Object-Process Methodology

    OpenAIRE

    Delalandre, Mathieu

    2005-01-01

    We present in this paper a symbol recognition system for graphic documents. It is based on a contextual approach to structural symbol recognition exploiting an Object-Process Methodology. It uses a processing library composed of structural recognition processes and contextual evaluation processes. These processes allow our system to deal with the multi-representation of symbols. The different processes are controlled, in an automatic way, by an inference engine during the r...

  11. Using a data fusion-based activity recognition framework to determine surveillance system requirements

    CSIR Research Space (South Africa)

    Le Roux, WH

    2007-07-01

    Full Text Available A technique is proposed to extract system requirements for a maritime area surveillance system, based on an activity recognition framework originally intended for the characterisation, prediction and recognition of intentional actions for threat...

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  13. Motorcycle Start-stop System based on Intelligent Biometric Voice Recognition

    Science.gov (United States)

    Winda, A.; E Byan, W. R.; Sofyan; Armansyah; Zariantin, D. L.; Josep, B. G.

    2017-03-01

    The current mechanical key in a motorcycle is prone to burglary, theft or misplacement. Intelligent biometric voice recognition is proposed as an alternative means to replace this mechanism. The proposed system decides whether the voice belongs to the user or not and whether the word uttered by the user is ‘On’ or ‘Off’. The decision is sent to an Arduino in order to start or stop the engine. The recorded voice is processed in order to extract features which are later used as input to the proposed system. The Mel-Frequency Cepstral Coefficient (MFCC) is adopted as the feature extraction technique. The extracted features are then used as input to the SVM-based identifier. Experimental results confirm the effectiveness of the proposed intelligent voice recognition and word recognition system. They show that the proposed method produces good training and testing accuracies of 99.31% and 99.43%, respectively. Moreover, the proposed system shows a false rejection rate (FRR) of 0.18% and a false acceptance rate (FAR) of 17.58%. For intelligent word recognition, the training and testing accuracies are 100% and 96.3%, respectively.
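A compact sketch of the MFCC-plus-SVM pipeline summarized above, with librosa and scikit-learn standing in for the paper's implementation; the file names, the 16 kHz sample rate and the averaging of MFCC frames into a single utterance vector are all assumptions introduced for illustration.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Average MFCCs over time to obtain a fixed-length descriptor of one utterance."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# X: one feature vector per recording, y: labels such as "user_on", "user_off", "impostor"
# X = np.array([mfcc_features(p) for p in wav_paths])
# clf = SVC(kernel="rbf").fit(X, y)
# decision = clf.predict([mfcc_features("new_command.wav")])[0]
```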

  14. Applications of PCA and SVM-PSO Based Real-Time Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Shieh

    2014-01-01

    Full Text Available This paper incorporates principal component analysis (PCA) with support vector machine-particle swarm optimization (SVM-PSO) for developing real-time face recognition systems. The integrated scheme adopts the SVM-PSO method to improve the validity of PCA-based image recognition systems for dynamic visual perception. Face recognition in most human-robot interaction applications is accomplished by PCA-based methods because of their dimensionality reduction. However, PCA-based systems are only suitable for processing faces with the same facial expressions and/or under the same viewing directions. Since the facial feature selection process can be considered a problem of global combinatorial optimization in machine learning, SVM-PSO is used as an optimal classifier of the system. In this paper, the PSO is used to implement feature selection, and the SVMs serve as fitness functions of the PSO for classification problems. Experimental results demonstrate that the proposed method simplifies features effectively and obtains higher classification accuracy.
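The following sketch illustrates the general idea of PSO-driven feature selection with SVM classification accuracy as the fitness function, as the record describes. It is a binary-PSO variant written only for illustration; the swarm size, inertia and acceleration constants, and the cross-validation setup are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_feature_selection(X, y, n_particles=20, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        # SVM cross-validation accuracy on the selected feature subset
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

    pos = rng.random((n_particles, n_feat)) > 0.5          # binary feature masks
    vel = rng.normal(0.0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = (0.7 * vel
               + 1.5 * r1 * (pbest.astype(float) - pos.astype(float))
               + 1.5 * r2 * (gbest.astype(float) - pos.astype(float)))
        pos = rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))  # sigmoid update
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    return gbest    # boolean mask of selected features
```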

  15. Touchless palmprint recognition systems

    CERN Document Server

    Genovese, Angelo; Scotti, Fabio

    2014-01-01

    This book examines the context, motivation and current status of biometric systems based on the palmprint, with a specific focus on touchless and less-constrained systems. It covers new technologies in this rapidly evolving field and is one of the first comprehensive books on palmprint recognition systems. It discusses the research literature and the most relevant industrial applications of palmprint biometrics, including the low-cost solutions based on webcams. The steps of biometric recognition are described in detail, including acquisition setups, algorithms, and evaluation procedures. Const

  16. Cognitive object recognition system (CORS)

    Science.gov (United States)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  17. Comparing grapheme-based and phoneme-based speech recognition for Afrikaans

    CSIR Research Space (South Africa)

    Basson, WD

    2012-11-01

    Full Text Available This paper compares the recognition accuracy of a phoneme-based automatic speech recognition system with that of a grapheme-based system, using Afrikaans as case study. The first system is developed using a conventional pronunciation dictionary...

  18. A multi-view face recognition system based on cascade face detector and improved Dlib

    Science.gov (United States)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector is used to extract Haar-like features from the training samples, and the Haar-like features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply this proposed method to recognizing face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. This method works well for multi-view face recognition, and it has also been simulated and tested, showing satisfactory experimental results.

  19. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed by many bio-symbols. Speech and facial expression are two of them. They are both regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice, and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough-set-based feature selection is a good method for dimension reduction. So 13 speech features out of 37 and 10 facial features out of 33 are selected to represent the emotional information, and 52 audiovisual features are selected owing to the synchronization when speech and video are fused together. The experiment results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend of emotion recognition in the future.

  20. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    Science.gov (United States)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of augmented reality (AR) maintenance guiding systems. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition can be divided into three stages: gesture segmentation, gesture characteristic feature modeling, and gesture recognition. In the segmentation stage, to solve the misrecognition of skin-like regions, a segmentation algorithm combining background modeling and skin color is adopted to exclude skin-like regions. In the gesture characteristic feature modeling stage, plenty of characteristic features are analyzed and acquired, such as structure characteristics, Hu invariant moments and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. SVM is a learning method based on statistical learning theory, possessing a solid theoretical foundation and excellent learning ability; it addresses many issues in machine learning and has particular advantages in dealing with small samples and non-linear, high-dimensional pattern recognition. The gesture recognition of the augmented reality maintenance guiding system is realized by the SVM after the granulation of all the characteristic features. The experimental results of the simulation of number gesture recognition and its application in an augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition of the AR maintenance guiding system can be greatly enhanced by the improved SVM.
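As a small illustration of the hand-crafted features named above (Hu invariant moments and a Fourier descriptor of the hand contour) feeding an SVM classifier, the sketch below assumes a binary hand mask produced by the segmentation stage; the descriptor length and SVM settings are assumptions.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def gesture_features(binary_mask, n_fd=16):
    """binary_mask: uint8 image, 255 = hand pixels. Returns Hu moments + Fourier descriptor."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    pts = contour.squeeze().astype(float)
    z = pts[:, 0] + 1j * pts[:, 1]                 # contour as a complex signal
    mag = np.abs(np.fft.fft(z))
    fd = mag[1:n_fd + 1] / (mag[1] + 1e-8)         # scale-normalized descriptor
    return np.concatenate([hu, fd])

# X = np.array([gesture_features(m) for m in training_masks])
# clf = SVC(kernel="rbf").fit(X, labels)
# predicted = clf.predict([gesture_features(test_mask)])
```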

  1. Multimodal Biometric System Based on the Recognition of Face and Both Irises

    Directory of Open Access Journals (Sweden)

    Yeong Gon Kim

    2012-09-01

    Full Text Available The performance of unimodal biometric systems (based on a single modality such as face or fingerprint) has to contend with various problems, such as illumination variation, skin condition, environmental conditions, and device variations. Therefore, multimodal biometric systems have been used to overcome the limitations of unimodal biometrics and provide highly accurate recognition. In this paper, we propose a new multimodal biometric system based on score-level fusion of face and both-iris recognition. Our study has the following novel features. First, the proposed device acquires images of the face and both irises simultaneously. The proposed device consists of a face camera, two iris cameras, near-infrared illuminators and cold mirrors. Second, fast and accurate iris detection is based on two circular edge detections, which are performed on the iris image on the basis of the size of the iris detected in the face image. Third, the combined accuracy is enhanced by combining the scores for the face and both irises using a support vector machine. The experimental results show that the equal error rate for the proposed method is 0.131%, which is lower than that of face or iris recognition alone and of other fusion methods.
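A brief sketch of the score-level fusion step described above: each comparison yields a face score and two iris scores, and an SVM is trained to separate genuine from impostor score vectors. The individual matchers, score normalization and the equal-error-rate computation are outside the scope of this sketch; the names below are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# scores: (n_comparisons, 3) array of [face_score, left_iris_score, right_iris_score]
# labels: 1 for genuine comparisons, 0 for impostor comparisons
def train_fusion_classifier(scores, labels):
    return SVC(kernel="rbf", probability=True).fit(scores, labels)

def fused_match_score(clf, face_s, left_iris_s, right_iris_s):
    """Probability that the comparison is genuine, used as the fused score."""
    return clf.predict_proba([[face_s, left_iris_s, right_iris_s]])[0, 1]
```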

  2. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  3. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2013-01-01

    Full Text Available With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  4. Man-system interface based on automatic speech recognition: integration to a virtual control desk

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Pereira, Claudio M.N.A.; Aghina, Mauricio Alves C., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b, E-mail: cmnap@ien.gov.b, E-mail: mag@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Nomiya, Diogo V., E-mail: diogonomiya@gmail.co [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2009-07-01

    This work reports the implementation of a man-system interface based on automatic speech recognition, and its integration into a virtual nuclear power plant control desk. The latter is intended to reproduce a real control desk using virtual reality technology, for operator training and ergonomic evaluation purposes. An automatic speech recognition system was developed to serve as a new interface with users, substituting for the computer keyboard and mouse. Users can operate this virtual control desk in front of a computer monitor or a projection screen through spoken commands. The automatic speech recognition interface developed is based on a well-known signal processing technique named cepstral analysis, and on artificial neural networks. The speech recognition interface is described, along with its integration with the virtual control desk, and results are presented. (author)

  5. Man-system interface based on automatic speech recognition: integration to a virtual control desk

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Pereira, Claudio M.N.A.; Aghina, Mauricio Alves C.; Nomiya, Diogo V.

    2009-01-01

    This work reports the implementation of a man-system interface based on automatic speech recognition, and its integration into a virtual nuclear power plant control desk. The latter is intended to reproduce a real control desk using virtual reality technology, for operator training and ergonomic evaluation purposes. An automatic speech recognition system was developed to serve as a new interface with users, substituting for the computer keyboard and mouse. Users can operate this virtual control desk in front of a computer monitor or a projection screen through spoken commands. The automatic speech recognition interface developed is based on a well-known signal processing technique named cepstral analysis, and on artificial neural networks. The speech recognition interface is described, along with its integration with the virtual control desk, and results are presented. (author)

  6. Improving a HMM-based off-line handwriting recognition system using MME-PSO optimization

    Science.gov (United States)

    Hamdani, Mahdi; El Abed, Haikal; Hamdani, Tarek M.; Märgner, Volker; Alimi, Adel M.

    2011-01-01

    One of the nontrivial steps in the development of a classifier is the design of its architecture. This paper presents a new algorithm, Multi Models Evolvement (MME), using Particle Swarm Optimization (PSO). This algorithm is a modified version of the basic PSO, which is used for the unsupervised design of Hidden Markov Model (HMM) based architectures. The proposed algorithm is applied to an Arabic handwriting recognizer based on discrete probability HMMs. After the optimization of their architectures, the HMMs are trained with the Baum-Welch algorithm. The validation of the system is based on the IfN/ENIT database. The performance of the developed approach is compared to the systems participating in the 2005 competition on Arabic handwriting recognition organized at the International Conference on Document Analysis and Recognition (ICDAR). The final system is a combination of an optimized HMM with 6 other HMMs obtained by a simple variation of the number of states. An absolute improvement of 6% in word recognition rate, reaching about 81%, is achieved compared to the baseline system (ARAB-IfN). The proposed recognizer also outperforms most of the known state-of-the-art systems.

  7. The nuclear fuel rod character recognition system based on neural network technique

    International Nuclear Information System (INIS)

    Kim, Woong-Ki; Park, Soon-Yong; Lee, Yong-Bum; Kim, Seung-Ho; Lee, Jong-Min; Chien, Sung-Il.

    1994-01-01

    Nuclear fuel rods should be discriminated and managed systematically by the numeric characters that are printed at the end of each rod in the process of producing a fuel assembly. The characters are used to examine the manufacturing process of the fuel rods during the inspection of irradiated fuel rods. Therefore, automatic character recognition is one of the most important technologies for establishing an automated manufacturing process for fuel assemblies. In the developed character recognition system, a mesh feature set extracted from each character printed on the fuel rod is employed to train a neural network based on the back-propagation algorithm as the classifier for the character recognition system. Performance evaluation has been carried out on a test set that is not included in the training character set. (author)
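As a small illustration of the mesh feature set mentioned above, the sketch below divides a binarized character image into a grid and uses the ink density of each cell as one feature; the grid size and the back-propagation network that would consume these features are assumptions.

```python
import numpy as np

def mesh_features(char_img, grid=(4, 4)):
    """char_img: 2-D binary array (1 = ink). Returns grid[0]*grid[1] cell-density features."""
    h, w = char_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = char_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            feats.append(cell.mean())      # fraction of ink pixels in this cell
    return np.array(feats)
```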

  8. A Novel Model-Based Driving Behavior Recognition System Using Motion Sensors

    Directory of Open Access Journals (Sweden)

    Minglin Wu

    2016-10-01

    Full Text Available In this article, a novel driving behavior recognition system based on a specific physical model and motion sensory data is developed to promote traffic safety. Based on the theory of rigid body kinematics, we build a specific physical model to reveal the data change rule during the vehicle moving process. In this work, we adopt a nine-axis motion sensor including a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, and apply a Kalman filter for noise elimination and an adaptive time window for data extraction. Based on the feature extraction guided by the built physical model, various classifiers are implemented to recognize different driving behaviors. Leveraging the system, normal driving behaviors (such as accelerating, braking, lane changing and turning with caution) and aggressive driving behaviors (such as sudden accelerating, braking, lane changing and turning) can be classified with a high accuracy of 93.25%. Compared with traditional driving behavior recognition methods using machine learning only, the proposed system possesses a solid theoretical basis, performs better and has good prospects.
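The following is a minimal one-dimensional Kalman filter of the kind used in the record above for sensor noise elimination, applied to one accelerometer axis at a time under a constant-signal model; the process and measurement noise values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """z: raw samples of one sensor axis; returns the filtered estimate."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z), dtype=float)
    for i, meas in enumerate(z):
        p = p + q                      # predict (constant-signal process model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (meas - x)         # update with the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out
```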

  9. Intelligent Automatic Right-Left Sign Lamp Based on Brain Signal Recognition System

    Science.gov (United States)

    Winda, A.; Sofyan; Sthevany; Vincent, R. S.

    2017-12-01

    Comfort, as a part of the human factor, plays an important role in today's advanced automotive technology. Many of the current technologies go in the direction of automotive driver assistance features. However, many of the driver assistance features still require physical movement by the human to enable them. In this work, the proposed method is used to make a certain feature function without any physical movement; instead, the human just needs to think about it in their mind. In this work, the brain signal is recorded and processed in order to be used as input to the recognition system. A right-left sign lamp based on the brain signal recognition system can potentially replace the button or switch of the specific device in order to make the lamp work. The system then decides whether the signal is ‘Right’ or ‘Left’. The decision on the right-left side of the brain signal recognition is sent to a processing board in order to activate the automotive relay, which is used to activate the sign lamp. Furthermore, an intelligent system approach is used to develop an authorized model based on the brain signal. In particular, a Support Vector Machines (SVMs)-based classification system is used in the proposed system to recognize the left-right of the brain signal. Experimental results confirm the effectiveness of the proposed intelligent automatic brain-signal-based right-left sign lamp access control system. The signal is processed by Linear Prediction Coefficients (LPC) and Support Vector Machines (SVMs), and the resulting experiment shows training and testing accuracies of 100% and 80%, respectively.

  10. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture

    Directory of Open Access Journals (Sweden)

    Yuanhong Zhong

    2018-05-01

    Full Text Available Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, and the classification method and fine counting based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications.

  11. Gait recognition based on integral outline

    Science.gov (United States)

    Ming, Guan; Fang, Lv

    2017-02-01

    Biometric identification technology is replacing traditional security technology, which has become a trend, and gait recognition has also become a hot spot of research because its features are difficult to imitate or steal. This paper presents a gait recognition system based on the integral outline of the human body. The system has three important aspects: the preprocessing of the gait image, feature extraction and classification. Finally, a polling method is used to evaluate the performance of the system, and the problems existing in gait recognition and the directions of development in the future are summarized.

  12. A Biometric Face Recognition System Using an Algorithm Based on the Principal Component Analysis Technique

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-06-01

    Full Text Available This article deals with a recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces considered as features of the face. The PCA technique is based on the matching between the facial test image and the training prototype vectors. The matching score between the facial test image and the training prototype vectors is calculated between their coefficient vectors. If the matching score is high, we have the best recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.

  13. A neural network based artificial vision system for licence plate recognition.

    Science.gov (United States)

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high-level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine which is suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following performances (on average) on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.

  14. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  15. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Science.gov (United States)

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on the hard decision is deduced in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on the hard decision, utilizing the soft decision can improve the accuracy of blind recognition. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on the soft decision is proposed. Meanwhile, the improved algorithm can be extended to other signal modulation forms. Then, the complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can recognize the parameters of frame synchronization words blindly. Moreover, the improved algorithm can enhance the accuracy of blind recognition obviously.
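To make the hard-decision idea concrete, the sketch below reshapes a received bitstream into candidate frames and scores each bit position by how consistently it repeats across frames; positions carrying the synchronization word stand out with scores near 1. This is only the intuition behind the approach; the paper's actual decision statistics, frame-length search and soft-decision refinement are not reproduced, and the threshold is an assumption.

```python
import numpy as np

def sync_column_scores(bits, frame_len):
    """bits: 1-D array of hard-decision bits (0/1); returns per-position agreement in [0.5, 1]."""
    bits = np.asarray(bits)
    n_frames = len(bits) // frame_len
    frames = bits[:n_frames * frame_len].reshape(n_frames, frame_len)
    ones = frames.mean(axis=0)
    return np.maximum(ones, 1.0 - ones)   # fraction of frames agreeing at each bit position

# Positions whose score is close to 1 are candidate sync-word locations:
# scores = sync_column_scores(rx_bits, frame_len=256)
# candidates = np.where(scores > 0.95)[0]
```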

  16. Case-Based Policy and Goal Recognition

    Science.gov (United States)

    2015-09-30

    Policy and Goal Recognizer (PaGR), a case-based system for multiagent keyhole recognition. PaGR is a knowledge recognition component within a decision... However, unlike our agent in the BVR domain, these recognition agents have access to perfect information. Single-agent keyhole plan recognition can be... listed below: 1. Facing Target 2. Closing on Target 3. Target Range 4. Within a Target's Weapon Range 5. Has Target within Weapon Range 6. Is in Danger

  17. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions generated from such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system, in which two types of gesture recognition modules are prepared and with which the most appropriate recognition module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach)(for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.

  18. Improved RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Oliu Simon, Marc; Corneanu, Ciprian; Nasrollahi, Kamal

    2016-01-01

    Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of unimodal facial recognition systems has been observed in recent years. At the same time, multimodal facial recognition is a promising approach. This paper combines the latest successes in both directions by applying deep learning Convolutional Neural Networks (CNN) to the multimodal RGB-D-T based facial recognition problem, outperforming previously published results.

  19. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Directory of Open Access Journals (Sweden)

    Jiangyi Qin

    Full Text Available A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on the hard decision is deduced in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on the hard decision, utilizing the soft decision can improve the accuracy of blind recognition. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on the soft decision is proposed. Meanwhile, the improved algorithm can be extended to other signal modulation forms. Then, the complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can recognize the parameters of frame synchronization words blindly. Moreover, the improved algorithm can enhance the accuracy of blind recognition obviously.

  20. A Russian Keyword Spotting System Based on Large Vocabulary Continuous Speech Recognition and Linguistic Knowledge

    Directory of Open Access Journals (Sweden)

    Valentin Smirnov

    2016-01-01

    Full Text Available The paper describes the key concepts of a word spotting system for Russian based on large vocabulary continuous speech recognition. Key algorithms and system settings are described, including the pronunciation variation algorithm, and experimental results on real-life telecom data are provided. A description of the system architecture and the user interface is also given. The system is based on the CMU Sphinx open-source speech recognition platform and on the linguistic models and algorithms developed by Speech Drive LLC. The effective combination of baseline statistical methods, real-world training data, and the intensive use of linguistic knowledge led to a quality result applicable to industrial use.

  1. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf society: the development of a Latvian Sign Language Recognition System. More specifically, data preprocessing methods are discussed in the paper and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  2. Recognition of chemical entities: combining dictionary-based and grammar-based approaches

    Science.gov (United States)

    2015-01-01

    Background The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. Conclusions We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named

  3. Recognition of chemical entities: combining dictionary-based and grammar-based approaches.

    Science.gov (United States)

    Akhondi, Saber A; Hettne, Kristina M; van der Horst, Eelke; van Mulligen, Erik M; Kors, Jan A

    2015-01-01

    The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming
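A small sketch of the term confidence score described in the two records above, i.e., a normalized ratio of a term's frequency in chemical journals to its frequency in non-chemical journals, used to rank recognized compounds. The corpus count dictionaries and the add-one smoothing are assumptions introduced for illustration.

```python
def term_confidence(term, chem_counts, nonchem_counts, chem_total, nonchem_total, alpha=1.0):
    """Return a score in (0, 1); values near 1 indicate chemistry-specific terms."""
    f_chem = (chem_counts.get(term, 0) + alpha) / (chem_total + alpha)
    f_non = (nonchem_counts.get(term, 0) + alpha) / (nonchem_total + alpha)
    return f_chem / (f_chem + f_non)

# Rank the compounds found in a document by their confidence score:
# ranked = sorted(found_terms, key=lambda t: term_confidence(t, cc, nc, ct, nt), reverse=True)
```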

  4. A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data

    Directory of Open Access Journals (Sweden)

    Alessandro Manzi

    2017-05-01

    Full Text Available Human activity recognition is an important area in computer vision, with its wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions that are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach reaches excellent performance using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. The results are promising for testing in a real context.

  5. A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data.

    Science.gov (United States)

    Manzi, Alessandro; Dario, Paolo; Cavallo, Filippo

    2017-05-11

    Human activity recognition is an important area in computer vision, with its wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions that are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach reaches excellent performances using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. The results are promising for the test in real context.

  6. IoT-Based Image Recognition System for Smart Home-Delivered Meal Services

    Directory of Open Access Journals (Sweden)

    Hsiao-Ting Tseng

    2017-07-01

    Full Text Available Population ageing is an important global issue. The Taiwanese government has used various Internet of Things (IoT) applications in the “10-year long-term care program 2.0”. It is expected that the efficiency and effectiveness of long-term care services will be improved through IoT support. Home-delivered meal services for the elderly are important for home-based long-term care services. To ensure that the right meals are delivered to the right recipient at the right time, the runners need to take a picture of the meal recipient when the meal is delivered. This study uses an IoT-based image recognition system to design an integrated service to improve the management of image recognition. The core technology of this IoT-based image recognition system is statistical histogram-based k-means clustering for image segmentation. However, this method is time-consuming. Therefore, we propose using the statistical histogram to obtain a probability density function of the pixels of an image and segmenting them with weighting for the same intensity. This aims to increase the computational performance while achieving the same results as k-means clustering. We combined the histogram and k-means clustering in order to overcome the high computational cost of k-means clustering. The results indicate that the proposed method is more than 10 times faster than k-means clustering.
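The sketch below illustrates the histogram-weighted k-means idea described above: rather than clustering every pixel, the 256 grey levels are clustered with weights given by the image histogram, which yields the same segmentation far more cheaply. The number of clusters and the iteration count are assumptions.

```python
import numpy as np

def histogram_kmeans(gray_img, k=3, iters=50):
    """gray_img: 2-D uint8 image. Returns cluster centers and a per-pixel label map."""
    hist = np.bincount(gray_img.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    centers = np.linspace(0.0, 255.0, k)
    for _ in range(iters):
        # assign each grey level to its nearest center
        assign = np.argmin(np.abs(levels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            w = hist[assign == j]
            if w.sum() > 0:
                # update center as the histogram-weighted mean of its grey levels
                centers[j] = np.average(levels[assign == j], weights=w)
    return centers, assign[gray_img]

# centers, label_map = histogram_kmeans(gray_photo, k=3)
```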

  7. THE DESIGN OF KNOWLEDGE BASE FOR SURFACE RELATIONS BASED PART RECOGNITION APPROACH

    Directory of Open Access Journals (Sweden)

    Adem ÇİÇEK

    2007-01-01

    Full Text Available In this study, a new knowledge base for an expert system used in a part recognition algorithm has been designed. Parts are recognized by the computer program by comparing the face adjacency relations and attributes represented in the rules of the knowledge base with the face adjacency relations and attributes generated from the STEP file of the part. In addition, the rule-writing process has been greatly simplified by generating the rules represented in the knowledge base with an automatic rule-writing module developed within the system. With the knowledge base and automatic rule-writing module used in the part recognition system, simple, intermediate and complex parts can be recognized by the part recognition program.

  8. A Review on Video-Based Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Shian-Ru Ke

    2013-06-01

    Full Text Available This review article surveys extensively the current progress made toward video-based human activity recognition. Three aspects of human activity recognition are addressed: core technology, human activity recognition systems, and applications from low-level to high-level representation. For the core technology, three critical processing stages are thoroughly discussed: human object segmentation, feature extraction and representation, and activity detection and classification algorithms. For human activity recognition systems, three main types are covered: single-person activity recognition, multiple-people interaction and crowd behavior, and abnormal activity recognition. Finally, the application domains are discussed in detail, specifically surveillance environments, entertainment environments and healthcare systems. Our survey, which aims to provide a comprehensive state-of-the-art review of the field, also addresses several challenges associated with these systems and applications. Applications in healthcare monitoring systems are surveyed in particular detail.

  9. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    Directory of Open Access Journals (Sweden)

    Yong-Nyuo Shin

    2008-01-01

    Full Text Available Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be admitted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we perform objective evaluations by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process.

  10. Primitive Based Action Representation and recognition

    DEFF Research Database (Denmark)

    Baby, Sanmohan

    The presented work is aimed at designing a system that will model and recognize actions and their interaction with objects. Such a system is aimed at facilitating robot task learning. Activity modeling and recognition is very important for its potential applications in surveillance, human-machine interfaces, entertainment, biomechanics, etc. Recent developments in neuroscience suggest that all actions are compositions of smaller units called primitives. Current works based on primitives for action recognition use a supervised framework for specifying the primitives. We propose a method to extract primitives automatically. These primitives are to be used to generate actions based on certain rules for combining them. These rules are expressed as a stochastic context-free grammar. A model merging approach is adopted to learn a Hidden Markov Model to fit the observed data sequences. The states of the HMM...

  11. Development of remote handling system based on 3-D shape recognition technique

    International Nuclear Information System (INIS)

    Tomizuka, Chiaki; Takeuchi, Yutaka

    2006-01-01

    In a nuclear facility, the maintenance and repair activities must be done remotely in a radioactive environment. Fuji Electric Systems Co., Ltd. has developed a remote handling system based on 3-D recognition technique. The system recognizes the pose and position of the target to manipulate, and visualizes the scene with the target in 3-D, enabling an operator to handle it easily. This paper introduces the concept and the key features of this system. (author)

  12. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems, for eye and hand gesture recognition, are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second one detects static hand gestures. To avoid confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers and the results show that our system is robust and competitive.

  13. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Full Text Available Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.

  14. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Science.gov (United States)

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568

  15. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  16. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  17. Business model for sensor-based fall recognition systems.

    Science.gov (United States)

    Fachinger, Uwe; Schöpke, Birte

    2014-01-01

    AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed parallel to the R&D process with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given and the mutual exclusive alternatives for each partial model are presented. The partial models are interconnected and the combinations of compatible alternatives lead to consistent alternative business models. However, in the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.

  18. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Full Text Available Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
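
    The record above combines wavelet-domain features with an SVM classifier. The sketch below shows only that frequency-domain part under toy assumptions (synthetic beats, arbitrary wavelet family and decomposition level); the paper additionally uses kernel-ICA features and a genetic algorithm for parameter tuning, which are not reproduced here.

    ```python
    # Extract discrete-wavelet-transform features from ECG beats and classify with an SVM.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def dwt_features(beat, wavelet="db4", level=4):
        coeffs = pywt.wavedec(beat, wavelet, level=level)
        # simple statistics of each sub-band as the feature vector
        return np.array([f(c) for c in coeffs for f in (np.mean, np.std, np.max, np.min)])

    rng = np.random.default_rng(0)
    beats = rng.normal(size=(200, 300))            # 200 synthetic beats, 300 samples each
    labels = rng.integers(0, 5, size=200)          # 5 toy arrhythmia classes
    X = np.array([dwt_features(b) for b in beats])
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```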

  19. Stress reaction process-based hierarchical recognition algorithm for continuous intrusion events in optical fiber prewarning system

    Science.gov (United States)

    Qu, Hongquan; Yuan, Shijiao; Wang, Yanping; Yang, Dan

    2018-04-01

    To improve the recognition performance of optical fiber prewarning system (OFPS), this study proposed a hierarchical recognition algorithm (HRA). Compared with traditional methods, which employ only a complex algorithm that includes multiple extracted features and complex classifiers to increase the recognition rate with a considerable decrease in recognition speed, HRA takes advantage of the continuity of intrusion events, thereby creating a staged recognition flow inspired by stress reaction. HRA is expected to achieve high-level recognition accuracy with less time consumption. First, this work analyzed the continuity of intrusion events and then presented the algorithm based on the mechanism of stress reaction. Finally, it verified the time consumption through theoretical analysis and experiments, and the recognition accuracy was obtained through experiments. Experiment results show that the processing speed of HRA is 3.3 times faster than that of a traditional complicated algorithm and has a similar recognition rate of 98%. The study is of great significance to fast intrusion event recognition in OFPS.

  20. A Vehicle Steering Recognition System Based on Low-Cost Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Xinhua Liu

    2017-03-01

    Full Text Available Recognizing how a vehicle is steered and then alerting drivers in real time is of utmost importance to the vehicle and driver’s safety, since fatal accidents are often caused by dangerous vehicle maneuvers, such as rapid turns, fast lane-changes, etc. Existing solutions using video or in-vehicle sensors have been employed to identify dangerous vehicle maneuvers, but these methods are subject to the effects of the environmental elements or the hardware is very costly. In the mobile computing era, smartphones have become key tools to develop innovative mobile context-aware systems. In this paper, we present a recognition system for dangerous vehicle steering based on the low-cost sensors found in a smartphone: i.e., the gyroscope and the accelerometer. To identify vehicle steering maneuvers, we focus on the vehicle’s angular velocity, which is characterized by gyroscope data from a smartphone mounted in the vehicle. Three steering maneuvers including turns, lane-changes and U-turns are defined, and a vehicle angular velocity matching algorithm based on Fast Dynamic Time Warping (FastDTW) is adopted to recognize the vehicle steering. The results of extensive experiments show that the average accuracy rate of the presented recognition reaches 95%, which implies that the proposed smartphone-based method is suitable for recognizing dangerous vehicle steering maneuvers.
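
    The core of the record above is matching a gyroscope angular-velocity sequence against maneuver templates with dynamic time warping. The sketch below uses a plain DTW implementation in place of the FastDTW variant named in the abstract; the template shapes and signal sizes are illustrative assumptions.

    ```python
    # Match an observed yaw-rate sequence against maneuver templates with DTW.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    t = np.linspace(0, 1, 100)
    templates = {
        "turn":        0.5 * np.sin(np.pi * t),       # single lobe of angular velocity
        "lane_change": 0.3 * np.sin(2 * np.pi * t),   # positive then negative lobe
        "u_turn":      0.5 * np.ones_like(t),         # sustained angular velocity
    }
    rng = np.random.default_rng(0)
    observed = 0.45 * np.sin(np.pi * np.linspace(0, 1, 120)) + 0.02 * rng.normal(size=120)
    best = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
    print("recognized maneuver:", best)
    ```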

  1. DEVELOPMENT OF HOLE RECOGNITION SYSTEM FROM STEP FILE

    Directory of Open Access Journals (Sweden)

    C. F. Tan

    2017-11-01

    Full Text Available This paper describes the development of a Hole Recognition System (HRS) for Computer-Aided Process Planning (CAPP) using a neutral data format produced by a CAD system. The geometrical data of holes is retrieved from the STandard for the Exchange of Product model data (STEP) file. A rule-based algorithm is used during the recognition process. The current implementation of feature recognition is limited to simple hole features. Test results are presented to demonstrate the capabilities of the feature recognition algorithm.

  2. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  3. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-04-01

    Full Text Available Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  4. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  5. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    Full Text Available This paper presents a complete traffic sign recognition system based on vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  6. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  7. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. However, in security markets where low-cost ACSs exist, price competition is important, and there are limitations on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and utilizes only the reliable features among the trained ones during each authentication, achieving high recognition performance rates. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition performance rate independent of face image quality, with an EER (Equal Error Rate) about four times lower, under a variety of image conditions, than a system without prior probability distributions. In contrast, image-difference features without prior probabilities are sensitive to image quality. We also evaluated PCA, which has worse but constant performance because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to a performance comparable to PCA under poor image conditions.

  8. An Edge-Based Macao License Plate Recognition System

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2011-04-01

    Full Text Available This paper presents a system to recognize Macao license plates. A Sobel edge detector is employed to extract the vertical edges, and an edge composition algorithm is proposed to combine the edges into candidate plate regions. These regions are further examined for the existence of the character 'M' by a verification algorithm. A row separation algorithm is also proposed to handle both one-row and two-row plate types. Projection analysis and template matching methods are exploited to segment and recognize the characters. Various pre- and post-processing steps beyond the traditional implementation are proposed to improve the recognition accuracy. This work achieves a high recognition rate of 95%.
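
    The record above rests on two standard building blocks: vertical-edge extraction to locate the plate and template matching to recognize characters. The sketch below shows both on a synthetic image, assuming OpenCV; the drawn text, thresholds and template sizes are illustrative and not the paper's pipeline.

    ```python
    # Vertical Sobel edges as a plate-localization cue, then normalized template matching.
    import numpy as np
    import cv2

    # synthetic plate-like image with printed characters
    img = np.zeros((240, 320), dtype=np.uint8)
    cv2.putText(img, "MA-12-34", (40, 130), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)

    # 1) vertical edges are strong at character strokes
    edges = cv2.convertScaleAbs(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3))
    _, edge_mask = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(edge_mask)
    print("candidate plate region:", xs.min(), xs.max(), ys.min(), ys.max())

    # 2) recognize one character by normalized template matching
    template = np.zeros((40, 32), dtype=np.uint8)
    cv2.putText(template, "M", (2, 32), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    print("best match score for 'M':", float(res.max()))
    ```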

  9. Three dimensional pattern recognition using feature-based indexing and rule-based search

    Science.gov (United States)

    Lee, Jae-Kyu

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed, one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, those indexing vectors with the highest match probability are retrieved from the model image database, using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process through improved structuring of the object feature database and reduced database size.

  10. Degraded character recognition based on gradient pattern

    Science.gov (United States)

    Babu, D. R. Ramesh; Ravishankar, M.; Kumar, Manish; Wadera, Kevin; Raj, Aakash

    2010-02-01

    Degraded character recognition is a challenging problem in the field of Optical Character Recognition (OCR). The performance of an optical character recognition system depends upon the print quality of the input documents. Many OCRs have been designed that correctly identify finely printed documents, but very little work has been reported on the recognition of degraded documents. The efficiency of an OCR system decreases if the input image is degraded. In this paper, a novel approach based on gradient patterns for recognizing degraded printed characters is proposed. The approach makes use of the gradient pattern of an individual character for recognition. Experiments were conducted on character images that are either digitally written or degraded characters extracted from historical documents, and the results are found to be satisfactory.

  11. Man machine interface based on speech recognition

    International Nuclear Information System (INIS)

    Jorge, Carlos A.F.; Aghina, Mauricio A.C.; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2007-01-01

    This work reports the development of a Man Machine Interface based on speech recognition. The system must recognize spoken commands and execute the desired tasks without manual intervention by operators. The range of applications goes from the execution of commands in an industrial plant's control room to navigation and interaction in virtual environments. Results are reported for isolated word recognition, the isolated words corresponding to the spoken commands. In the pre-processing stage, relevant parameters are extracted from the speech signals using the cepstral analysis technique; these parameters are used for isolated word recognition and serve as the inputs of an artificial neural network that performs the recognition task. (author)
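
    The pipeline described above (cepstral parameters feeding a neural network for isolated-word recognition) can be sketched as follows. The real-cepstrum computation, frame sizes, synthetic signals and the scikit-learn MLP used in place of the authors' network are all illustrative assumptions.

    ```python
    # Cepstral features from framed speech, classified by a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def cepstral_features(signal, frame_len=256, n_ceps=12):
        feats = []
        for i in range(len(signal) // frame_len):
            frame = signal[i * frame_len:(i + 1) * frame_len] * np.hamming(frame_len)
            spectrum = np.abs(np.fft.rfft(frame)) + 1e-10
            cepstrum = np.fft.irfft(np.log(spectrum))      # real cepstrum of the frame
            feats.append(cepstrum[1:n_ceps + 1])           # low-quefrency coefficients
        return np.concatenate(feats)

    rng = np.random.default_rng(0)
    n_words, samples = 4, 256 * 8                          # 4 toy command words, 8 frames each
    X = np.array([cepstral_features(rng.normal(size=samples)) for _ in range(80)])
    y = rng.integers(0, n_words, size=80)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```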

  12. A NEW STRATEGY FOR IMPROVING FEATURE SETS IN A DISCRETE HMM­BASED HANDWRITING RECOGNITION SYSTEM

    NARCIS (Netherlands)

    Grandidier, F.; Sabourin, R.; Suen, C.Y.; Gilloux, M.

    2004-01-01

    In this paper we introduce a new strategy for improving a discrete HMM­based handwriting recognition system, by integrating several information sources from specialized feature sets. For a given system, the basic idea is to keep the most discriminative features, and to replace the others with new

  13. Individual recognition based on communication behaviour of male fowl.

    Science.gov (United States)

    Smith, Carolynn L; Taubert, Jessica; Weldon, Kimberly; Evans, Christopher S

    2016-04-01

    Correctly directing social behaviour towards a specific individual requires an ability to discriminate between conspecifics. The mechanisms of individual recognition include phenotype matching and familiarity-based recognition. Communication-based recognition is a subset of familiarity-based recognition wherein the classification is based on behavioural or distinctive signalling properties. Male fowl (Gallus gallus) produce a visual display (tidbitting) upon finding food in the presence of a female. Females typically approach displaying males. However, males may tidbit without food. We used the distinctiveness of the visual display and the unreliability of some males to test for communication-based recognition in female fowl. We manipulated the prior experience of the hens with the males to create two classes of males: S(+) wherein the tidbitting signal was paired with a food reward to the female, and S (-) wherein the tidbitting signal occurred without food reward. We then conducted a sequential discrimination test with hens using a live video feed of a familiar male. The results of the discrimination tests revealed that hens discriminated between categories of males based on their signalling behaviour. These results suggest that fowl possess a communication-based recognition system. This is the first demonstration of live-to-video transfer of recognition in any species of bird. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Robust and Effective Component-based Banknote Recognition for the Blind.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, Yingli

    2012-11-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes for assisting visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate, 2) robustness: handles a variety of currency designs and bills in various conditions, 3) high efficiency: recognizes banknotes quickly, and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework by using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect if there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system is evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves 100% true recognition rate and 0% false recognition rate. Our banknote recognition system is also tested by blind users.
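
    The record above matches local features between a reference bill and the camera view and then checks their spatial consistency. The sketch below illustrates that idea with ORB used as a freely available stand-in for SURF (SURF is patent-encumbered and not always shipped with OpenCV); the synthetic images and the 5-pixel consistency threshold are assumptions.

    ```python
    # Local-feature matching plus a crude spatial-consistency check between a reference
    # banknote image and a query view.
    import numpy as np
    import cv2

    reference = np.zeros((200, 400), dtype=np.uint8)            # stand-in "banknote"
    cv2.putText(reference, "100", (150, 120), cv2.FONT_HERSHEY_SIMPLEX, 3, 255, 5)
    cv2.rectangle(reference, (10, 10), (390, 190), 255, 3)
    query = cv2.GaussianBlur(reference, (3, 3), 0)               # slightly altered view

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(query, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # keep matches whose displacement agrees with the median displacement
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches])
    median = np.median(disp, axis=0)
    consistent = np.sum(np.linalg.norm(disp - median, axis=1) < 5.0)
    print("consistent matches:", int(consistent), "of", len(matches))
    ```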

  15. A Malaysian Vehicle License Plate Localization and Recognition System

    Directory of Open Access Journals (Sweden)

    Ganapathy Velappa

    2008-02-01

    Full Text Available Technological intelligence is a highly sought after commodity even in traffic-based systems. These intelligent systems do not only help in traffic monitoring but also in commuter safety, law enforcement and commercial applications. In this paper, a license plate localization and recognition system for vehicles in Malaysia is proposed. This system is developed based on digital images and can be easily applied to commercial car park systems for the use of documenting access of parking services, secure usage of parking houses and also to prevent car theft issues. The proposed license plate localization algorithm is based on a combination of morphological processes with a modified Hough Transform approach and the recognition of the license plates is achieved by the implementation of the feed-forward backpropagation artificial neural network. Experimental results show an average of 95% successful license plate localization and recognition in a total of 589 images captured from a complex outdoor environment.

  16. Automated road marking recognition system

    Science.gov (United States)

    Ziyatdinov, R. R.; Shigabiev, R. R.; Talipov, D. N.

    2017-09-01

    The development of automated road marking recognition systems for existing and future vehicle control systems is an urgent task. One way to implement such systems is the use of neural networks. To test the feasibility of using neural networks, software based on a single-layer perceptron has been developed. The resulting neural-network-based system has successfully coped with the task both when driving in the daytime and at night.

  17. Face-based recognition techniques: proposals for the metrological characterization of global and feature-based approaches

    Science.gov (United States)

    Betta, G.; Capriglione, D.; Crenna, F.; Rossi, G. B.; Gasparetto, M.; Zappa, E.; Liguori, C.; Paolillo, A.

    2011-12-01

    Security systems based on face recognition through video surveillance systems deserve great interest. Their use is important in several areas including airport security, identification of individuals and access control to critical areas. These systems are based either on the measurement of details of a human face or on a global approach whereby faces are considered as a whole. The recognition is then performed by comparing the measured parameters with reference values stored in a database. The result of this comparison is not deterministic because measurement results are affected by uncertainty due to random variations and/or to systematic effects. In these circumstances the recognition of a face is subject to the risk of a faulty decision. Therefore, a proper metrological characterization is needed to improve the performance of such systems. Suitable methods are proposed for a quantitative metrological characterization of face measurement systems, on which recognition procedures are based. The proposed methods are applied to three different algorithms based either on linear discrimination, on eigenface analysis, or on feature detection.

  18. Face-based recognition techniques: proposals for the metrological characterization of global and feature-based approaches

    International Nuclear Information System (INIS)

    Betta, G; Capriglione, D; Crenna, F; Rossi, G B; Gasparetto, M; Zappa, E; Liguori, C; Paolillo, A

    2011-01-01

    Security systems based on face recognition through video surveillance systems deserve great interest. Their use is important in several areas including airport security, identification of individuals and access control to critical areas. These systems are based either on the measurement of details of a human face or on a global approach whereby faces are considered as a whole. The recognition is then performed by comparing the measured parameters with reference values stored in a database. The result of this comparison is not deterministic because measurement results are affected by uncertainty due to random variations and/or to systematic effects. In these circumstances the recognition of a face is subject to the risk of a faulty decision. Therefore, a proper metrological characterization is needed to improve the performance of such systems. Suitable methods are proposed for a quantitative metrological characterization of face measurement systems, on which recognition procedures are based. The proposed methods are applied to three different algorithms based either on linear discrimination, on eigenface analysis, or on feature detection

  19. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition

  20. A Kinect based sign language recognition system using spatio-temporal features

    Science.gov (United States)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, which is effective for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images and the temporal-domain features are transformed into the spatial domain. These processes are performed on both RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. In order to recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
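
    A minimal sketch of the feature pipeline described above, under toy assumptions: frame differences are accumulated into a motion image, a 2D DCT is applied, and a Manhattan-distance k-NN classifies the result. A low-frequency coefficient block is kept instead of zigzag scanning, and the clips, sizes and labels are synthetic.

    ```python
    # Accumulated-motion image -> 2D DCT -> low-frequency features -> k-NN (Manhattan).
    import numpy as np
    from scipy.fftpack import dct
    from sklearn.neighbors import KNeighborsClassifier

    def motion_dct_features(frames, block=8):
        # accumulated motion image: sum of absolute successive frame differences
        acc = np.sum(np.abs(np.diff(frames.astype(float), axis=0)), axis=0)
        coeffs = dct(dct(acc, axis=0, norm="ortho"), axis=1, norm="ortho")
        return coeffs[:block, :block].ravel()              # low-frequency block

    rng = np.random.default_rng(0)
    clips = rng.integers(0, 256, size=(30, 20, 64, 64))     # 30 clips, 20 frames, 64x64
    labels = rng.integers(0, 5, size=30)                    # 5 toy sign classes
    X = np.array([motion_dct_features(c) for c in clips])
    knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan").fit(X, labels)
    print("training accuracy:", knn.score(X, labels))
    ```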

  1. Design and Implementation of Behavior Recognition System Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    Full Text Available We build a human behavior recognition system based on a convolutional neural network, constructed for specific human behaviors in public places. Firstly, the videos in the human behavior dataset are segmented into images, and background subtraction is applied to extract the moving foreground silhouette of the body. Secondly, the training data are used to train the designed convolutional neural network, and the deep network is optimized by stochastic gradient descent. Finally, the various behaviors in the samples are classified and identified with the obtained network model, and the recognition results are compared with current mainstream methods. The results show that the convolutional neural network can learn human behavior models automatically and identify human behaviors without any manually annotated training.
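
    The record above trains a convolutional network on background-subtracted frames with stochastic gradient descent. The sketch below, assuming PyTorch, shows a network of that general shape trained for a few SGD steps on toy tensors; the layer sizes, class count and data are illustrative, not the authors' architecture.

    ```python
    # Small CNN over single-channel (silhouette) images, trained with SGD on toy data.
    import torch
    import torch.nn as nn

    class BehaviorCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, n_classes)   # for 64x64 inputs

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = BehaviorCNN()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # toy batch: 8 foreground-silhouette images (1 channel, 64x64) with random labels
    x = torch.rand(8, 1, 64, 64)
    y = torch.randint(0, 4, (8,))
    for _ in range(5):                                 # a few SGD steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("final toy loss:", float(loss))
    ```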

  2. Adamantane in Drug Delivery Systems and Surface Recognition.

    Science.gov (United States)

    Štimac, Adela; Šekutor, Marina; Mlinarić-Majerski, Kata; Frkanec, Leo; Frkanec, Ruža

    2017-02-16

    The adamantane moiety is widely applied in design and synthesis of new drug delivery systems and in surface recognition studies. This review focuses on liposomes, cyclodextrins, and dendrimers based on or incorporating adamantane derivatives. Our recent concept of adamantane as an anchor in the lipid bilayer of liposomes has promising applications in the field of targeted drug delivery and surface recognition. The results reported here encourage the development of novel adamantane-based structures and self-assembled supramolecular systems for basic chemical investigations as well as for biomedical application.

  3. Adamantane in Drug Delivery Systems and Surface Recognition

    Directory of Open Access Journals (Sweden)

    Adela Štimac

    2017-02-01

    Full Text Available The adamantane moiety is widely applied in design and synthesis of new drug delivery systems and in surface recognition studies. This review focuses on liposomes, cyclodextrins, and dendrimers based on or incorporating adamantane derivatives. Our recent concept of adamantane as an anchor in the lipid bilayer of liposomes has promising applications in the field of targeted drug delivery and surface recognition. The results reported here encourage the development of novel adamantane-based structures and self-assembled supramolecular systems for basic chemical investigations as well as for biomedical application.

  4. Feature and score fusion based multiple classifier selection for iris recognition.

    Science.gov (United States)

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.

  5. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.
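
    The fusion step named in the two records above is decision-level combination by majority voting over several classifiers. The sketch below illustrates that mechanism with generic scikit-learn classifiers and synthetic data standing in for the four HMM-based iris matchers of the paper.

    ```python
    # Decision-level fusion: majority vote over the predictions of several classifiers.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=200, n_features=20, n_classes=3,
                               n_informative=8, random_state=0)
    classifiers = [LogisticRegression(max_iter=1000), SVC(), KNeighborsClassifier()]
    predictions = np.array([c.fit(X, y).predict(X) for c in classifiers])  # (n_clf, n_samples)

    # majority vote across classifier outputs for every sample
    fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, predictions)
    print("fused training accuracy:", float(np.mean(fused == y)))
    ```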

  6. Rotation-invariant neural pattern recognition system with application to coin recognition.

    Science.gov (United States)

    Fukumi, M; Omatu, S; Takeda, F; Kosaka, T

    1992-01-01

    In pattern recognition, it is often necessary to deal with problems to classify a transformed pattern. A neural pattern recognition system which is insensitive to rotation of input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was used in a rotation-invariant coin recognition problem to distinguish between a 500 yen coin and a 500 won coin. The results show that the approach works well for variable rotation pattern recognition.

  7. Embedded wavelet-based face recognition under variable position

    Science.gov (United States)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with limited computation resources available on such systems. The approach described in this work is tested on three platforms from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on a x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
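
    A minimal sketch of the representation used above, assuming PyWavelets and toy data: each face is summarized by the level-3 approximation coefficients of its 2D wavelet transform (shrinking the stored template by roughly 2^(2K) for K = 3) and matched with a nearest-neighbour search. The wavelet family, image sizes and gallery are assumptions.

    ```python
    # Level-3 wavelet approximation coefficients as a compact face signature.
    import numpy as np
    import pywt

    def wavelet_signature(face, wavelet="haar", level=3):
        approx = pywt.wavedec2(face, wavelet, level=level)[0]   # approximation sub-band only
        return approx.ravel()

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(40, 64, 64))                      # 40 enrolled faces, 64x64
    signatures = np.array([wavelet_signature(f) for f in gallery])
    probe = gallery[7] + 0.05 * rng.normal(size=(64, 64))        # noisy view of subject 7
    d = np.linalg.norm(signatures - wavelet_signature(probe), axis=1)
    print("identified subject:", int(np.argmin(d)))              # expected: 7
    ```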

  8. Quality based approach for adaptive face recognition

    Science.gov (United States)

    Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the usage of low quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and if necessary restore image quality according to the need of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest neighbor classifier. We shall demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
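
    The general idea above, enhancing an image only when a no-reference quality score says it is needed, can be sketched as follows. The crude contrast score and threshold below are illustrative stand-ins for the paper's SALQI and MH measures, not their definitions.

    ```python
    # Quality-gated enhancement: equalize the histogram only for low-contrast images.
    import numpy as np
    import cv2

    def contrast_score(gray):
        return float(gray.std()) / 128.0          # crude global-contrast proxy

    def adaptive_enhance(gray, threshold=0.35):
        return cv2.equalizeHist(gray) if contrast_score(gray) < threshold else gray

    rng = np.random.default_rng(0)
    low_contrast = rng.normal(120, 10, size=(64, 64)).clip(0, 255).astype(np.uint8)
    good_contrast = rng.normal(128, 60, size=(64, 64)).clip(0, 255).astype(np.uint8)
    print("low-contrast image enhanced:",
          not np.array_equal(adaptive_enhance(low_contrast), low_contrast))
    print("good image left untouched:",
          np.array_equal(adaptive_enhance(good_contrast), good_contrast))
    ```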

  9. Automatic Number Plate Recognition System for IPhone Devices

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2013-06-01

    Full Text Available This paper presents a system for automatic number plate recognition, implemented for devices running the iOS operating system. The methods used for number plate recognition are based on existing methods, but optimized for devices with low hardware resources. To solve the task of automatic number plate recognition we have divided it into the following subtasks: image acquisition, localization of the number plate position on the image and character detection. The first subtask is performed by the camera of an iPhone, the second one is done using image pre-processing methods and template matching. For the character recognition we are using a feed-forward artificial neural network. Each of these methods is presented along with its results.

  10. Automatic TLI recognition system. Part 1: System description

    Energy Technology Data Exchange (ETDEWEB)

    Partin, J.K.; Lassahn, G.D.; Davidson, J.R.

    1994-05-01

    This report describes an automatic target recognition system for fast screening of large amounts of multi-sensor image data, based on low-cost parallel processors. This system uses image data fusion and gives uncertainty estimates. It is relatively low cost, compact, and transportable. The software is easily enhanced to expand the system's capabilities, and the hardware is easily expandable to increase the system's speed. This volume gives a general description of the ATR system.

  11. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2014-07-01

    Full Text Available Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  12. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    Science.gov (United States)

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  13. A food recognition system for diabetic patients based on an optimized bag-of-features model.

    Science.gov (United States)

    Anthimopoulos, Marios M; Gianola, Lauro; Scarnato, Luca; Diem, Peter; Mougiakakou, Stavroula G

    2014-07-01

    Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.
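
    The bag-of-features pipeline described above (local descriptors, a k-means visual vocabulary, word histograms, and a linear SVM) is sketched below under toy assumptions: small random gray patches stand in for dense SIFT on the HSV color space, and a 32-word dictionary stands in for the 10000-word vocabulary reported in the record.

    ```python
    # Bag-of-features: patch descriptors -> k-means vocabulary -> word histograms -> linear SVM.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans
    from sklearn.svm import LinearSVC

    def random_patches(img, n=50, size=8, rng=None):
        rng = rng or np.random.default_rng(0)
        ys = rng.integers(0, img.shape[0] - size, n)
        xs = rng.integers(0, img.shape[1] - size, n)
        return np.array([img[y:y + size, x:x + size].ravel() for y, x in zip(ys, xs)])

    def bof_histogram(img, vocab, rng=None):
        words = vocab.predict(random_patches(img, rng=rng))
        return np.bincount(words, minlength=vocab.n_clusters).astype(float)

    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(60, 64, 64)).astype(float)   # 60 toy "food" images
    labels = rng.integers(0, 3, size=60)                              # 3 toy food classes
    vocab = MiniBatchKMeans(n_clusters=32, random_state=0, n_init=3)
    vocab.fit(np.vstack([random_patches(im, rng=rng) for im in images]))
    X = np.array([bof_histogram(im, vocab, rng=rng) for im in images])
    clf = LinearSVC(max_iter=5000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```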

  14. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  15. Event Recognition Based on Deep Learning in Chinese Texts.

    Directory of Open Access Journals (Sweden)

    Yajun Zhang

    Full Text Available Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.

  16. Event Recognition Based on Deep Learning in Chinese Texts.

    Science.gov (United States)

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.
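
    The CEERM pipeline described above (word feature vectors, deep belief network feature learning, back-propagation classification of trigger words) can be approximated with a greedily stacked pair of restricted Boltzmann machines followed by a back-propagation classifier. The sketch below uses scikit-learn's BernoulliRBM and MLPClassifier as stand-ins and assumes the six feature layers have already been vectorized; it is only a rough analogue of the paper's DBN, not its dynamic-supervised variant.

    # DBN-like sketch: stacked RBMs for feature learning, then a back-propagation classifier.
    from sklearn.neural_network import BernoulliRBM, MLPClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    def build_trigger_word_model():
        return Pipeline([
            ("scale", MinMaxScaler()),                                   # RBMs expect inputs in [0, 1]
            ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
            ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
            ("bp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)),
        ])

    # model = build_trigger_word_model().fit(word_features, is_trigger_word)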

  17. A Presence-Based Context-Aware Chronic Stress Recognition System

    Directory of Open Access Journals (Sweden)

    Andrej Kos

    2012-11-01

    Full Text Available Stressors encountered in daily life may play an important role in personal well-being. Chronic stress can have a serious long-term impact on our physical as well as our psychological health, due to ongoing increased levels of the chemicals released in the ‘fight or flight’ response. The currently available stress assessment methods are usually not suitable for daily chronic stress measurement. The paper presents a context-aware chronic stress recognition system that addresses this problem. The proposed system obtains contextual data from various mobile sensors and other external sources in order to calculate the impact of ongoing stress. By identifying and visualizing ongoing stress situations of an individual user, he/she is able to modify his/her behavior in order to successfully avoid them. Clinical evaluation of the proposed methodology has been made in parallel by using an electrodermal activity sensor. To the best of our knowledge, the system presented herein is the first one that enables recognition of chronic stress situations on the basis of user context.

  18. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    OpenAIRE

    Luqman, Muhammad Muzzamil; Brouard, Thierry; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at the heart of most graphic recognition systems. Our method uses a structural approach for symbol representation and a statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational g...

  19. Implementation of age and gender recognition system for intelligent digital signage

    Science.gov (United States)

    Lee, Sang-Heon; Sohn, Myoung-Kyu; Kim, Hyunduk

    2015-12-01

    Intelligent digital signage systems transmit customized advertising and information by analyzing users and customers, unlike existing systems that present advertising in broadcast form without regard to the type of customer. Development of intelligent digital signage systems is currently being pushed forward vigorously. In this study, we designed a system capable of analyzing the gender and age of customers based on images obtained from a camera, although there are many different methods for analyzing customers. We conducted age and gender recognition experiments using a public database. The age/gender recognition experiments were performed through a histogram matching method by extracting Local Binary Pattern (LBP) features after the facial area in the input image was normalized. The results of the experiment showed that the gender recognition rate was as high as approximately 97% on average. Age recognition was conducted based on categorization into 5 age classes. Age recognition rates for women and men were about 67% and 68%, respectively, when conducted separately for each gender.
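
    The age/gender step described above reduces to extracting an LBP histogram from a normalized face crop and matching it against per-class histograms. A minimal sketch follows, assuming scikit-image for the LBP operator and a chi-square distance as one common histogram-matching choice; the paper's exact matching rule and class templates are not specified here.

    # LBP histogram features and nearest-template matching via chi-square distance.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(face_gray, P=8, R=1):
        lbp = local_binary_pattern(face_gray, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)  # P + 2 uniform bins
        return hist

    def chi_square(h1, h2, eps=1e-10):
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def classify(face_gray, templates):
        """templates: dict mapping a class label (e.g. 'male', '20-29') to a mean LBP histogram."""
        h = lbp_histogram(face_gray)
        return min(templates, key=lambda c: chi_square(h, templates[c]))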

  20. Adamantane in Drug Delivery Systems and Surface Recognition

    OpenAIRE

    Adela Štimac; Marina Šekutor; Kata Mlinarić-Majerski; Leo Frkanec; Ruža Frkanec

    2017-01-01

    The adamantane moiety is widely applied in design and synthesis of new drug delivery systems and in surface recognition studies. This review focuses on liposomes, cyclodextrins, and dendrimers based on or incorporating adamantane derivatives. Our recent concept of adamantane as an anchor in the lipid bilayer of liposomes has promising applications in the field of targeted drug delivery and surface recognition. The results reported here encourage the development of novel adamantane-based struc...

  1. Scale Invariant Gabor Descriptor-Based Noncooperative Iris Recognition

    Directory of Open Access Journals (Sweden)

    Du Yingzi

    2010-01-01

    Full Text Available A new noncooperative iris recognition method is proposed. In this method, the iris features are extracted using a Gabor descriptor. The feature extraction and comparison are scale, deformation, rotation, and contrast-invariant. It works with off-angle and low-resolution iris images. The Gabor wavelet is incorporated with scale-invariant feature transformation (SIFT) for feature extraction to better extract the iris features. Both the phase and magnitude of the Gabor wavelet outputs were used in a novel way for local feature point description. Two feature region maps were designed to locally and globally register the feature points and each subregion in the map is locally adjusted to the dilation/contraction/deformation. We also developed a video-based non-cooperative iris recognition system by integrating video-based non-cooperative segmentation, segmentation evaluation, and score fusion units. The proposed method shows good performance for frontal and off-angle iris matching. Video-based recognition methods can improve non-cooperative iris recognition accuracy.

  2. Scale Invariant Gabor Descriptor-based Noncooperative Iris Recognition

    Directory of Open Access Journals (Sweden)

    Zhi Zhou

    2010-01-01

    Full Text Available A new noncooperative iris recognition method is proposed. In this method, the iris features are extracted using a Gabor descriptor. The feature extraction and comparison are scale, deformation, rotation, and contrast-invariant. It works with off-angle and low-resolution iris images. The Gabor wavelet is incorporated with scale-invariant feature transformation (SIFT) for feature extraction to better extract the iris features. Both the phase and magnitude of the Gabor wavelet outputs were used in a novel way for local feature point description. Two feature region maps were designed to locally and globally register the feature points and each subregion in the map is locally adjusted to the dilation/contraction/deformation. We also developed a video-based non-cooperative iris recognition system by integrating video-based non-cooperative segmentation, segmentation evaluation, and score fusion units. The proposed method shows good performance for frontal and off-angle iris matching. Video-based recognition methods can improve non-cooperative iris recognition accuracy.

  3. Flexible Piezoelectric Sensor-Based Gait Recognition

    Directory of Open Access Journals (Sweden)

    Youngsu Cha

    2018-02-01

    Full Text Available Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system with flexible piezoelectric sensors in loose clothing. The gait recognition system does not directly sense lower-body angles. It does, however, detect the transition between standing and walking. Specifically, we use the signals from the flexible sensors attached to the knee and hip parts on loose pants. We detect the periodic motion component using the discrete time Fourier series from the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system. In the monitoring system, the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate over 93 %.

  4. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that the color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of a video surveillance system for face recognition.
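
    One plausible reading of the tone-mapping idea above is a global statistics transfer: learn per-channel mean and standard deviation from the training set of photorealistic images and remap each channel of the input so that its statistics match. The sketch below implements that simple variant; it is not the authors' exact remapping function.

    # Global colour-statistics remapping toward statistics learned from a training set.
    import numpy as np

    def learn_statistics(train_images):
        """train_images: list of float arrays (H x W x C). Returns per-channel mean and std."""
        pixels = np.concatenate([im.reshape(-1, im.shape[-1]) for im in train_images])
        return pixels.mean(axis=0), pixels.std(axis=0)

    def remap(image, target_mean, target_std, eps=1e-6):
        flat = image.reshape(-1, image.shape[-1])
        src_mean, src_std = flat.mean(axis=0), flat.std(axis=0)
        out = (image - src_mean) / (src_std + eps) * target_std + target_mean
        return np.clip(out, 0.0, 255.0)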

  5. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-10-01

    Full Text Available Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared

  6. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    Science.gov (United States)

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based
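
    The post-processing stage named in the abstract (CNN features reduced with PCA, then classified with an SVM) is straightforward to sketch once the CNN features are available. The code below assumes the features have already been extracted by some convolutional network and are passed in as a plain array; it is a sketch of that final PCA + SVM stage only, using scikit-learn.

    # PCA + SVM over pre-extracted CNN features for presentation attack detection.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def train_pad_classifier(cnn_features, labels, n_components=100):
        """cnn_features: (N x D) array of CNN descriptors; labels: 1 = live, 0 = attack."""
        clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
        return clf.fit(cnn_features, labels)

    def is_live(clf, feature_vector):
        return bool(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])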

  7. A recurrent dynamic model for correspondence-based face recognition.

    Science.gov (United States)

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-12-29

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.

  8. Poka Yoke system based on image analysis and object recognition

    Science.gov (United States)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a method of quality management aimed at preventing faults from arising during production processes. It deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was generated and developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process involves a higher cost than the cost of disposal. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means placing different equipment (mechanical, electronic) on the production line. As a consequence, and coupled with the fact that the method itself is invasive and affects the production process, the cost of diagnostics increases, and the machines by which a Poka Yoke system is implemented become bulkier and more sophisticated. In this paper we propose a solution for a Poka Yoke system based on image analysis and fault identification. The solution consists of a module for image acquisition, mid-level processing and an object recognition module using an associative memory (Hopfield network type). All are integrated into an embedded system with an AD (Analog to Digital) converter and Zynq 7000 (22 nm technology).
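
    The object-recognition module above relies on a Hopfield-type associative memory. A minimal Hopfield memory is sketched below: stored patterns are bipolar vectors, weights come from Hebbian outer products, and recall iterates a sign threshold until the state settles. The image-acquisition and binarization stages of the system are omitted.

    # Minimal Hopfield associative memory: Hebbian storage and iterative recall.
    import numpy as np

    class Hopfield:
        def __init__(self, patterns):
            """patterns: (P x N) array of bipolar (+1/-1) vectors to store."""
            _, N = patterns.shape
            self.W = patterns.T @ patterns / N       # Hebbian weight matrix
            np.fill_diagonal(self.W, 0.0)            # no self-connections

        def recall(self, state, steps=20):
            s = state.astype(float).copy()
            for _ in range(steps):
                s = np.sign(self.W @ s)
                s[s == 0] = 1.0                      # break ties deterministically
            return s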

  9. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    Science.gov (United States)

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system.

  10. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has received more and more attention. This paper studies a face recognition system including face detection, feature extraction and face recognition, mainly by researching the related theory and key technology of various preprocessing methods in the face detection process, using the KPCA method, and focusing on the different recognition results obtained with different preprocessing methods. In this paper, we choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess face images, and then use a face recognition method based on kernel principal component analysis for analysis and research; the experiments were carried out using a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, integrating the kernel method into the PCA algorithm makes the extracted features represent the original image information better, since a nonlinear feature extraction method is used, and thus a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images may yield different results, and hence different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
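
    The recognition stage described above (kernel PCA with a polynomial kernel, followed by matching) can be sketched with scikit-learn as below, using a nearest-neighbour match in the KPCA feature space. Flattened, preprocessed face images are assumed as input; the skin segmentation and illumination compensation steps are left out.

    # Kernel PCA (polynomial kernel) features + nearest-neighbour face matching.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier

    def train_kpca_recognizer(train_faces, labels, n_components=50, degree=2):
        """train_faces: (N x D) array of flattened, preprocessed face images."""
        kpca = KernelPCA(n_components=n_components, kernel="poly", degree=degree)
        feats = kpca.fit_transform(train_faces)
        knn = KNeighborsClassifier(n_neighbors=1).fit(feats, labels)
        return kpca, knn

    def recognize_face(kpca, knn, face):
        return knn.predict(kpca.transform(np.asarray(face).reshape(1, -1)))[0]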

  11. Improved pattern recognition systems by hybrid methods

    International Nuclear Information System (INIS)

    Duerr, B.; Haettich, W.; Tropf, H.; Winkler, G.; Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung e.V., Karlsruhe

    1978-12-01

    This report describes a combination of statistical and syntactical pattern recognition methods. The hierarchically structured recognition system consists of a conventional statistical classifier, a structural classifier analysing the topological composition of the patterns, a stage reducing the number of hypotheses made by the first two stages, and a mixed stage based on a search for maximum similarity between syntactically generated prototypes and patterns. The stages work on different principles so that mistakes made in one stage are not repeated in the other stages. This concept is applied to the recognition of numerals written without constraints. If no samples are rejected, a recognition rate of 99.5% is obtained. (orig.) [de

  12. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
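
    The low-rank plus sparse decomposition at the core of the abstract can be sketched with a basic principal component pursuit iteration (inexact augmented Lagrangian style), shown below for a matrix whose columns are vectorized training images. This is a generic solver sketch, not the authors' implementation, and the feature-extraction and sparsity-concentration-index steps are omitted.

    # Robust PCA sketch: decompose D into low-rank L and sparse S by iterative thresholding.
    import numpy as np

    def shrink(M, tau):                              # soft-thresholding operator
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def robust_pca(D, n_iter=100):
        """D: (pixels x images) matrix of vectorised training images. Returns (L, S)."""
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
        L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular-value thresholding
            S = shrink(D - L + Y / mu, lam / mu)          # sparse error update
            Y = Y + mu * (D - L - S)                      # dual variable update
        return L, S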

  13. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs

  14. Inertial Sensor-Based Gait Recognition: A Review

    Science.gov (United States)

    Sprager, Sebastijan; Juric, Matjaz B.

    2015-01-01

    With the recent development of microelectromechanical systems (MEMS), inertial sensors have become widely used in research on wearable gait analysis due to several factors, such as being easy to use and low-cost. Considering the fact that each individual has a unique way of walking, inertial sensors can be applied to the problem of gait recognition, where the assessed gait can be interpreted as a biometric trait. Thus, inertial sensor-based gait recognition has great potential to play an important role in many security-related applications. Since inertial sensors are included in smart devices that are nowadays present at every step, inertial sensor-based gait recognition has become a very attractive and emerging field of research that has provided many interesting discoveries recently. This paper provides a thorough and systematic review of the current state of the art in this field of research. The review procedure has revealed that the latest advanced inertial sensor-based gait recognition approaches are able to sufficiently recognise users when relying on inertial data obtained during gait by a single commercially available smart device in controlled circumstances, including fixed placement and small variations in gait. Furthermore, these approaches have also shown a considerable breakthrough in realistic use in uncontrolled circumstances, showing great potential for their further development and wide applicability. PMID:26340634

  15. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    Science.gov (United States)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A framework for multi-word recognition based on weighted finite state transducers is presented, using explicit word segmentation, a combination of isolated word recognizers and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the RIMES-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.

  16. Design and implementation of face recognition system based on Windows

    Science.gov (United States)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    In view of the lack of security and convenience of the basic Windows password login, we introduce a biometric technology, face recognition, into the computer login system. Not only can it secure the computer system, it can also identify administrators at different levels according to their access level. With the enhanced system security, users neither have to enter a cumbersome password nor worry about their password being stolen.

  17. A freely-available authoring system for browser-based CALL with speech recognition

    Directory of Open Access Journals (Sweden)

    Myles O'Brien

    2017-06-01

    Full Text Available A system for authoring browser-based CALL material incorporating Google speech recognition has been developed and made freely available for download. The system provides a teacher with a simple way to set up CALL material, including an optional image, sound or video, which will elicit spoken (and/or typed) answers from the user and check them against a list of specified permitted answers, giving feedback with hints when necessary. The teacher needs no HTML or Javascript expertise, just the facilities and ability to edit text files and upload to the Internet. The structure and functioning of the system are explained in detail, and some suggestions are given for practical use. Finally, some of its limitations are described.

  18. Neural Mechanisms and Information Processing in Recognition Systems

    Directory of Open Access Journals (Sweden)

    Mamiko Ozaki

    2014-10-01

    Full Text Available Nestmate recognition is a hallmark of social insects. It is based on the match/mismatch of an identity signal carried by members of the society with that of the perceiving individual. While the behavioral response, amicable or aggressive, is very clear, the neural systems underlying recognition are not fully understood. Here we contrast two alternative hypotheses for the neural mechanisms that are responsible for the perception and information processing in recognition. We focus on recognition via chemical signals, as the common modality in social insects. The first, classical, hypothesis states that upon perception of recognition cues by the sensory system the information is passed as is to the antennal lobes and to higher brain centers where the information is deciphered and compared to a neural template. Match or mismatch information is then transferred to some behavior-generating centers where the appropriate response is elicited. An alternative hypothesis, that of “pre-filter mechanism”, posits that the decision as to whether to pass on the information to the central nervous system takes place in the peripheral sensory system. We suggest that, through sensory adaptation, only alien signals are passed on to the brain, specifically to an “aggressive-behavior-switching center”, where the response is generated if the signal is above a certain threshold.

  19. Exemplar Based Recognition of Visual Shapes

    DEFF Research Database (Denmark)

    Olsen, Søren I.

    2005-01-01

    This paper presents an approach to visual shape recognition based on exemplars of attributed keypoints. Training is performed by storing exemplars of keypoints detected in labeled training images. Recognition is made by keypoint matching and voting according to the labels of the matched keypoints. The matching is insensitive to rotations, limited scalings and small deformations. The recognition is robust to noise, background clutter and partial occlusion. Recognition is possible from few training images and improves with the number of training images.

  20. Segment-based acoustic models for continuous speech recognition

    Science.gov (United States)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  1. User-independent accelerometer-based gesture recognition for mobile devices

    Directory of Open Access Journals (Sweden)

    Eduardo METOLA

    2013-07-01

    Full Text Available Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be light and to operate in a user-independent manner in real time. The recognition system was implemented in a smart phone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with the gestures made with the mobile phone.

  2. Fluid pipeline system leak detection based on neural network and pattern recognition

    International Nuclear Information System (INIS)

    Tang Xiujia

    1998-01-01

    The mechanism of stress wave propagation along the pipeline system of an NPP, caused by turbulent ejection from a pipeline leak, is researched. A series of characteristic indices are described in the time domain or frequency domain, and a compression algorithm is developed for original data compression. A back propagation neural network (BPNN), with an input matrix composed of stress wave characteristics in the time or frequency domain, is first proposed to classify various situations of the pipeline in order to detect leakage in fluid flow pipelines. The capability of the new method has been demonstrated by experiments and finally used to design a handy instrument for pipeline leakage detection. A pipeline system usually has many inner branches and is often in an adjusting dynamic condition, so it is difficult for traditional pipeline diagnosis facilities to distinguish normal inner pipeline operation from a pipeline fault. The author first proposes pipeline wave propagation identification by pattern recognition to diagnose pipeline leaks. A series of pattern primitives such as peaks, valleys, horizon lines, capstan peaks, dominant relations and slave relations are used to extract features of the negative pressure wave form. A context-free grammar for the symbolic representation of the negative wave form is used, and a negative wave form parsing system with application to structural pattern recognition based on this representation is first proposed to detect and localize leaks in fluid pipelines.

  3. End-Stop Exemplar Based Recognition

    DEFF Research Database (Denmark)

    Olsen, Søren I.

    2003-01-01

    An approach to exemplar based recognition of visual shapes is presented. The shape information is described by attributed interest points (keys) detected by an end-stop operator. The attributes describe the statistics of lines and edges local to the interest point, the position of neighboring interest points, and (in the training phase) a list of recognition names. Recognition is made by a simple voting procedure. Preliminary experiments indicate that the recognition is robust to noise, small deformations, background clutter and partial occlusion.

  4. An artificial odor recognition system is developed for discriminating odors

    Directory of Open Access Journals (Sweden)

    Wisnu Jatmiko

    2002-12-01

    Full Text Available This artificial system consists of 16 quartz resonator crystals as the sensor array, with a frequency modulator and a frequency counter for each sensor connected directly to a microcomputer. We have already shown that an artificial odor recognition system with 4 sensors can discriminate simple odors correctly; however, when it was used to discriminate compound odors, the recognition capability of the system dropped significantly to about 40%. Results of experiments show that the developed artificial system with 16 sensors can discriminate compound aromas based on 6 gradients of alcohol concentration with a high recognition rate of 89.9% for a non-batch processing system, and 82.4% for batch processing of the classes of odors.

  5. Optical character recognition of camera-captured images based on phase features

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have been developed recently, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a lot of important information regardless of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  6. DEVELOPMENT OF AUTOMATED SPEECH RECOGNITION SYSTEM FOR EGYPTIAN ARABIC PHONE CONVERSATIONS

    Directory of Open Access Journals (Sweden)

    A. N. Romanenko

    2016-07-01

    Full Text Available The paper deals with the description of several speech recognition systems for Egyptian Colloquial Arabic. The research is based on the CALLHOME Egyptian corpus. A description of both systems is given: a classic one, based on Hidden Markov and Gaussian Mixture Models, and a state-of-the-art one with deep neural network acoustic models. We have demonstrated the contribution from the usage of speaker-dependent bottleneck features; for their extraction three extractors based on neural networks were trained. For their training three datasets in several languages were used: Russian, English and different Arabic dialects. We have studied the possibility of applying a small Modern Standard Arabic (MSA) corpus to derive phonetic transcriptions. The experiments have shown that application of the extractor obtained on the basis of the Russian dataset enables a significant increase in the quality of Arabic speech recognition. We have also found that the usage of phonetic transcriptions based on Modern Standard Arabic decreases recognition quality. Nevertheless, system operation results remain applicable in practice. In addition, we have carried out a study of the obtained models' application to the keyword search problem. The systems obtained demonstrate good results as compared to those published before. Some ways to improve speech recognition are offered.

  7. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

    Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches of human action recognition performed on public datasets (non-robot-based), including a novel approach of our own that is based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performances, especially given the adverse test conditions of the robot, relative to those of a fixed camera.
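
    The classifier named above, an SVM with a histogram-intersection kernel over bag-of-words action histograms, can be sketched with a precomputed kernel in scikit-learn. The dense-trajectory extraction and word encoding are assumed to have already produced normalized histograms.

    # SVM with a histogram-intersection kernel over bag-of-words action histograms.
    import numpy as np
    from sklearn.svm import SVC

    def intersection_kernel(A, B):
        """Pairwise histogram intersection between rows of A (n x d) and B (m x d)."""
        return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

    def train_action_svm(train_hists, labels):
        K = intersection_kernel(train_hists, train_hists)
        return SVC(kernel="precomputed").fit(K, labels)

    def predict_actions(clf, train_hists, test_hists):
        return clf.predict(intersection_kernel(test_hists, train_hists))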

  8. Euro Banknote Recognition System for Blind People.

    Science.gov (United States)

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-20

    This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter) dotted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on the modified Viola and Jones algorithms, while the banknote value recognition relies on the Speed Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
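
    The detection/recognition split above (cascade-style detection, then value recognition by local-feature matching) can be sketched with OpenCV. The cascade file name is hypothetical, and ORB is used here in place of the patented SURF descriptor, so this is only an illustration of the structure, not the authors' system.

    # Detect a banknote with a (hypothetical) cascade, then identify its value by keypoint matching.
    import cv2

    cascade = cv2.CascadeClassifier("banknote_cascade.xml")   # hypothetical trained cascade
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def detect_banknote(frame_gray):
        boxes = cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
        return boxes[0] if len(boxes) else None               # (x, y, w, h) of first detection

    def recognize_value(crop_gray, references):
        """references: dict mapping a denomination to a grayscale reference image."""
        _, desc = orb.detectAndCompute(crop_gray, None)
        if desc is None:
            return None
        best, best_count = None, 0
        for value, ref in references.items():
            _, ref_desc = orb.detectAndCompute(ref, None)
            matches = matcher.match(desc, ref_desc)
            good = [m for m in matches if m.distance < 40]    # crude match filter
            if len(good) > best_count:
                best, best_count = value, len(good)
        return best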

  9. Arm Motion Recognition and Exercise Coaching System for Remote Interaction

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2016-01-01

    Full Text Available Arm motion recognition and its related applications have become a promising human-computer interaction modality due to the rapid integration of numerical sensors in modern mobile phones. We implement a mobile-phone-based arm motion recognition and exercise coaching system that can help people carrying mobile phones to exercise anywhere at any time, especially persons who have very limited spare time and are constantly traveling across cities. We first design an improved k-means algorithm to cluster the collected 3-axis acceleration and gyroscope data of human actions into basic motions. A learning method based on Hidden Markov Models is then designed to classify and recognize continuous arm motions of both learners and coaches, and to measure the action similarities between them. We implement the system on a MIUI 2S mobile phone and evaluate its performance and recognition accuracy.

  10. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujrati, which have been extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will be useful for professionals in computer vision and image processing alike, dealing with different issues related to optical character recognition.

  11. The NA50 segmented target and vertex recognition system

    International Nuclear Information System (INIS)

    Bellaiche, F.; Cheynis, B.; Contardo, D.; Drapier, O.; Grossiord, J.Y.; Guichard, A.; Haroutunian, R.; Jacquin, M.; Ohlsson-Malek, F.; Pizzi, J.R.

    1997-01-01

    The NA50 segmented target and vertex recognition system is described. The segmented target consists of 7 sub-targets of 1-2 mm thickness. The vertex recognition system used to determine the sub-target where an interaction has occurred is based upon quartz elements which produce Cerenkov light when traversed by charged particles from the interaction. The geometrical arrangement of the quartz elements has been optimized for vertex recognition in 208Pb-Pb collisions at 158 GeV/nucleon. A simple algorithm provides a vertex recognition efficiency of better than 85% for dimuon trigger events collected with a 1 mm sub-target set-up. A method for recognizing interactions of projectile fragments (nuclei and/or groups of nucleons) is presented. The segmented target allows a large target thickness which together with a high beam intensity (∼10^7 ions/s) enables high statistics measurements. (orig.)

  12. Recognition and management of idiopathic systemic capillary leak syndrome: an evidence-based review.

    Science.gov (United States)

    Baloch, Noor Ul-Ain; Bikak, Marvi; Rehman, Abdul; Rahman, Omar

    2018-05-01

    Idiopathic systemic capillary leak syndrome (SCLS) is a unique disorder characterized by episodes of massive systemic leak of intravascular fluid leading to volume depletion and shock. A typical attack of SCLS consists of prodromal, leak and post-leak phases. Complications, such as compartment syndrome and pulmonary edema, usually develop during the leak and post-leak phases, respectively. Judicious intravenous hydration and early use of vasopressors is the cornerstone of management in such cases. Areas covered: The purpose of the present review is to provide an up-to-date, evidence-based review of our understanding of SCLS and its management in the light of currently available evidence. Idiopathic SCLS was first described in 1960 and, since then, more than 250 cases have been reported. A large number of cases have been reported over the past decade, most likely due to improved recognition. In the acute care setting, most patients with SCLS are managed as per the Surviving Sepsis guidelines and receive aggressive volume resuscitation, which is not the optimal management strategy for such patients. There is a need to raise awareness amongst physicians and clinicians in order to improve recognition of this disorder and ensure its appropriate management.

  13. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of the PDOE image. To recognize the gesture word, an 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition approach, based on glove sensors, uses a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognizing the 3D loci of the Polhemus sensor, a discrete HMM is also adopted. An alternative approach combines the two foregoing recognition systems and uses the vision and glove sensors together. The extracted mesh feature and the 8-direction code from the locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is introduced here and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.

  14. Image quality assessment for video stream recognition systems

    Science.gov (United States)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
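
    A simple version of the frame-quality gate argued for above is to score frames by sharpness and forward only the best ones to the recognizer. The sketch below uses the variance of the Laplacian as one illustrative quality measure; it is not the image-quality model of the paper.

    # Select the sharpest frames of a video stream before recognition.
    import cv2

    def sharpness(frame_gray):
        return cv2.Laplacian(frame_gray, cv2.CV_64F).var()    # blur metric: Laplacian variance

    def best_frames(frames_gray, threshold=100.0, top_k=3):
        scored = [(sharpness(f), i) for i, f in enumerate(frames_gray)]
        passed = sorted((s, i) for s, i in scored if s >= threshold)
        return [i for _, i in passed[-top_k:]]                # indices of the sharpest frames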

  15. A novel hybrid biometric electronic voting system: integrating finger print face recognition

    International Nuclear Information System (INIS)

    Najam, S.S.; Shaikh, A.Z.; Naqvi, S.

    2018-01-01

    A novel hybrid design based electronic voting system is proposed, implemented and analyzed. The proposed system uses two voter verification techniques to give better results in comparison to single identification based systems. Finger print and facial recognition based methods are used for voter identification. Cross verification of a voter during an election process provides better accuracy than a single parameter identification method. The facial recognition system uses the Viola-Jones algorithm along with the rectangular Haar feature selection method for detection and extraction of features to develop a biometric template and for feature extraction during the voting process. Cascaded machine learning based classifiers are used for comparing the features for identity verification using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor). This is accomplished by comparing the Eigen-vectors of the extracted features with the biometric template pre-stored in the election regulatory body database. The results of the proposed system show that the proposed cascaded design based system performs better than systems using other classifiers or separate schemes, i.e., facial or finger print based schemes. The proposed system will be highly useful for real time applications because it achieves 91% facial recognition accuracy under nominal lighting. (author)

  16. Biometric verification based on grip-pattern recognition

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Bazen, A.M.; Kauffman, J.A.; Hartel, Pieter H.; Delp, Edward J.; Wong, Ping W.

    This paper describes the design, implementation and evaluation of a user-verification system for a smart gun, which is based on grip-pattern recognition. An existing pressure sensor consisting of an array of 44 x 44 piezoresistive elements is used to measure the grip pattern. An interface has been

  17. Biometric verification based on grip-pattern recognition

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Bazen, A.M.; Kauffman, J.A.; Hartel, Pieter H.

    This paper describes the design, implementation and evaluation of a user-verification system for a smart gun, which is based on grip-pattern recognition. An existing pressure sensor consisting of an array of 44 × 44 piezoresistive elements is used to measure the grip pattern. An interface has been

  18. Cherry Picking Robot Vision Recognition System Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Zhang Qi Rong

    2016-01-01

    Full Text Available Using OpenCV functions, a cherry in a natural environment image is processed through image preprocessing, color recognition, threshold segmentation, morphological filtering, edge detection and the circular Hough transform, after which the cherry's center and circular contour can be drawn for the purpose of machine picking. The system is simple and effective.
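
    The OpenCV steps listed above condense to the sketch below: an HSV color threshold to isolate red regions, morphological cleanup, and a circular Hough transform that returns the cherry's center and radius. The HSV bounds and Hough parameters are illustrative values, not those of the paper.

    # Cherry localisation: HSV colour threshold -> morphology -> circular Hough transform.
    import cv2
    import numpy as np

    def find_cherry(bgr_image):
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))        # illustrative red range
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        blurred = cv2.GaussianBlur(mask, (9, 9), 2)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                                   param1=100, param2=20, minRadius=5, maxRadius=80)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)                # centre and radius
        return (x, y), r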

  19. SURVEY OF BIOMETRIC SYSTEMS USING IRIS RECOGNITION

    OpenAIRE

    S.PON SANGEETHA; DR.M.KARNAN

    2014-01-01

    Security plays an important role in any type of organization in today’s life. Iris recognition is one of the leading automatic biometric systems in the area of security, used to identify an individual person. Biometric systems include fingerprints, facial features, voice recognition, hand geometry, handwriting, the eye retina and the most secure one, presented in this paper, iris recognition. Biometric systems have become very popular in security systems because it is not possi...

  20. Implementation of CT and IHT Processors for Invariant Object Recognition System

    Directory of Open Access Journals (Sweden)

    J. Turan jr.

    2004-12-01

    Full Text Available This paper presents a PLD or ASIC implementation of key modules of an invariant object recognition system based on the combination of the Incremental Hough transform (IHT), correlation and rapid transform (RT). The invariant object recognition system was implemented partially in C++ language for a general-purpose processor on a personal computer and partially described in VHDL code for implementation in a PLD or ASIC.

  1. Human-inspired sound environment recognition system for assistive vehicles

    Science.gov (United States)

    González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando

    2015-02-01

    Objective. The human auditory system acquires environmental information under sound stimuli faster than visual or touch systems, which in turn, allows for faster human responses to such stimuli. It also complements senses such as sight, where direct line-of-view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environment factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification ranges from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated. Significance

  2. Euro Banknote Recognition System for Blind People

    Directory of Open Access Journals (Sweden)

    Larisa Dunai Dunai

    2017-01-01

    Full Text Available This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, the Pi NoIR (No Infrared filter), equipped with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on the modified Viola and Jones algorithms, while the banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
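
    A rough sketch of the two-stage idea (cascade detection followed by local-feature matching for value recognition) is given below. The cascade file, frame and template paths are hypothetical, and ORB is used as a freely available stand-in because OpenCV's SURF implementation lives in the non-free contrib modules; this is not the authors' implementation.

        # Stage 1: Viola-Jones-style detection with a cascade classifier.
        # Stage 2: local-feature matching of each candidate against value templates.
        # "banknote_cascade.xml", "camera_frame.jpg" and the template paths are hypothetical.
        import cv2

        detector = cv2.CascadeClassifier("banknote_cascade.xml")   # hypothetical trained cascade
        frame = cv2.imread("camera_frame.jpg")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        notes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        orb = cv2.ORB_create(nfeatures=500)                         # stand-in for SURF
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        templates = {"5 EUR": cv2.imread("eur5.jpg", 0), "10 EUR": cv2.imread("eur10.jpg", 0)}

        for (x, y, w, h) in notes:
            roi = gray[y:y + h, x:x + w]
            kp_r, des_r = orb.detectAndCompute(roi, None)
            best_value, best_score = None, 0
            for value, tmpl in templates.items():
                kp_t, des_t = orb.detectAndCompute(tmpl, None)
                if des_r is None or des_t is None:
                    continue
                matches = bf.match(des_r, des_t)
                score = sum(1 for m in matches if m.distance < 40)  # count strong matches
                if score > best_score:
                    best_value, best_score = value, score
            print("candidate at", (x, y, w, h), "->", best_value)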

  3. Container code recognition in information auto collection system of container inspection

    International Nuclear Information System (INIS)

    Su Jianping; Chen Zhiqiang; Zhang Li; Gao Wenhuan; Kang Kejun

    2003-01-01

    Customs now requires electronic processing and automatic detection. Container inspection should not only produce an image of the goods but also automatically obtain the container's code and weight. These functions, together with tracking control and information transfer, make up the Information Auto Collection system of container inspection, in which code recognition is the key point. The method described in this article is based on template matching and the closeness property of characters, which it uses for recognition. Based on a check rule, an adjustment algorithm is designed to form the whole recognition strategy. This strategy can achieve a high recognition ratio and robust performance.
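
    Since the recognition step is described as template matching on characters, the following toy sketch illustrates that step with OpenCV's matchTemplate. The template directory, segmented character image and the use of normalised cross-correlation are assumptions, and the check-rule adjustment algorithm of the article is not reproduced.

        # Toy sketch of template-matching character recognition for container codes.
        # Template files ("templates/A.png", ...) and the segmented character image are hypothetical.
        import cv2
        import string

        # one binarised template per allowed character
        alphabet = string.ascii_uppercase + string.digits
        templates = {c: cv2.imread(f"templates/{c}.png", cv2.IMREAD_GRAYSCALE) for c in alphabet}

        def recognise_character(char_img):
            """Return the template character with the highest normalised correlation."""
            best_char, best_score = None, -1.0
            for c, tmpl in templates.items():
                if tmpl is None:
                    continue
                resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
                score = cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED)[0][0]
                if score > best_score:
                    best_char, best_score = c, score
            return best_char, best_score

        char_img = cv2.imread("segmented_char.png", cv2.IMREAD_GRAYSCALE)
        print(recognise_character(char_img))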

  4. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.

  5. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System.

    Science.gov (United States)

    Partila, Pavol; Voznak, Miroslav; Tovarek, Jaromir

    2015-01-01

    The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the complexity of system computing. This step is necessary especially for systems that will be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide usability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture model is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of the speech emotion recognition system due to its accuracy and efficiency.
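
    As an illustration of the kind of feature-plus-classifier pipeline compared in the paper, the sketch below extracts spectral features (MFCCs via librosa, as a stand-in for the exact prosodic, spectral and voice-quality set) and trains a k-nearest-neighbour classifier. The corpus location and the filename-based label convention (standard for the Berlin EmoDB) are assumptions.

        # Sketch of a feature-plus-classifier pipeline: MFCC summary statistics + k-NN.
        # The corpus path is hypothetical; labels assume the standard EmoDB filename convention.
        import glob, os
        import librosa
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        def extract_features(wav_path):
            y, sr = librosa.load(wav_path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            # summarise each coefficient over time with mean and standard deviation
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        files = sorted(glob.glob("emodb/wav/*.wav"))            # hypothetical corpus location
        labels = [os.path.basename(f)[5] for f in files]        # 6th character encodes the emotion

        X = np.array([extract_features(f) for f in files])
        y = np.array(labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
        print("classification accuracy:", clf.score(X_te, y_te))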

  6. Face recognition system and method using face pattern words and face pattern bytes

    Science.gov (United States)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognitions for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  7. 2nd International Symposium on Signal Processing and Intelligent Recognition Systems

    CERN Document Server

    Bandyopadhyay, Sanghamitra; Krishnan, Sri; Li, Kuan-Ching; Mosin, Sergey; Ma, Maode

    2016-01-01

    This Edited Volume contains a selection of refereed and revised papers originally presented at the second International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS-2015), December 16-19, 2015, Trivandrum, India. The program committee received 175 submissions. Each paper was peer reviewed by at least three independent referees of the program committee, and 59 papers were finally selected. The papers offer stimulating insights into biometrics, digital watermarking, recognition systems, image and video processing, signal and speech processing, pattern recognition, machine learning and knowledge-based systems. The book is directed to researchers and scientists engaged in various fields of signal processing and related areas.

  8. An Innovative SIFT-Based Method for Rigid Video Object Recognition

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2014-01-01

    Full Text Available This paper presents an innovative SIFT-based method for rigid video object recognition (hereafter called RVO-SIFT). Just as in the human visual system, this method unifies the object recognition and feature updating processes, using both trajectory and feature matching, and can thereby learn new features not only in the training stage but also in the recognition stage. This greatly improves the completeness of the video object's features automatically and, in turn, drastically increases the ratio of correct recognition. The experimental results on real video sequences demonstrate its surprising robustness and efficiency.

  9. Automated recognition system for ELM classification in JET

    International Nuclear Information System (INIS)

    Duro, N.; Dormido, R.; Vega, J.; Dormido-Canto, S.; Farias, G.; Sanchez, J.; Vargas, H.; Murari, A.

    2009-01-01

    Edge localized modes (ELMs) are instabilities occurring in the edge of H-mode plasmas. Considerable efforts are being devoted to understanding the physics behind this non-linear phenomenon. A first characterization of ELMs is usually their identification as type I or type III. An automated pattern recognition system has been developed in JET for off-line ELM recognition and classification. The empirical method presented in this paper analyzes each individual ELM instead of starting from a temporal segment containing many ELM bursts. The ELM recognition and isolation is carried out using three signals: Dα, line integrated electron density and stored diamagnetic energy. A reduced set of characteristics (such as diamagnetic energy drop, ELM period or Dα shape) has been extracted to build supervised and unsupervised learning systems for classification purposes. The former are based on support vector machines (SVM). The latter have been developed with hierarchical and K-means clustering methods. The success rate of the classification systems is about 98% for a database of almost 300 ELMs.

  10. LPI Radar Waveform Recognition Based on Time-Frequency Distribution

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2016-10-01

    Full Text Available In this paper, an automatic radar waveform recognition system for high-noise environments is proposed. Signal waveform recognition techniques are widely applied in the fields of cognitive radio, spectrum management, radar applications, etc. We devise a system to classify the modulating signals widely used in low probability of intercept (LPI) radar detection systems. The radar signals are divided into eight types of classifications, including linear frequency modulation (LFM), BPSK (Barker code modulation), Costas codes and polyphase codes (comprising Frank, P1, P2, P3 and P4). The classifier is an Elman neural network (ENN), performing supervised classification based on features extracted by the system. Through the techniques of image filtering, image opening operation, skeleton extraction, principal component analysis (PCA), image binarization and Pseudo-Zernike moments, etc., the features are extracted from the Choi-Williams time-frequency distribution (CWD) image of the received data. In order to reduce redundant features and simplify calculation, a feature selection algorithm based on mutual information between classes and feature vectors is applied. The superiority of the proposed classification system is demonstrated by the simulations and analysis. Simulation results show that the overall ratio of successful recognition (RSR) is 94.7% at a signal-to-noise ratio (SNR) of −2 dB.

  11. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Pavol Partila

    2015-01-01

    Full Text Available The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the complexity of system computing. This step is necessary especially for systems that will be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide usability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture model is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of the speech emotion recognition system due to its accuracy and efficiency.

  12. Improving emotion recognition systems by embedding cardiorespiratory coupling

    International Nuclear Information System (INIS)

    Valenza, Gaetano; Lanatá, Antonio; Scilingo, Enzo Pasquale

    2013-01-01

    This work aims at showing improved performances of an emotion recognition system embedding information gathered from cardiorespiratory (CR) coupling. Here, we propose a novel methodology able to robustly identify up to 25 regions of a two-dimensional space model, namely the well-known circumplex model of affect (CMA). The novelty of embedding CR coupling information in an autonomic nervous system-based feature space better reveals the sympathetic activations upon emotional stimuli. A CR synchrogram analysis was used to quantify such a coupling in terms of the number of heartbeats per respiratory period. Physiological data were gathered from 35 healthy subjects emotionally elicited by means of affective pictures from the international affective picture system database. In this study, we finely detected five levels of arousal and five levels of valence as well as the neutral state, whose combinations were used for identifying 25 different affective states in the CMA plane. We show that the inclusion of the bivariate CR measures in a previously developed system based only on monovariate measures of heart rate variability, respiration dynamics and electrodermal response dramatically increases the recognition accuracy of a quadratic discriminant classifier, obtaining more than 90% correct classification per class. Finally, we propose a comprehensive description of the CR coupling during sympathetic elicitation by adapting an existing theoretical nonlinear model with external driving. The theoretical idea behind this model is that the CR system is comprised of weakly coupled self-sustained oscillators that, when exposed to an external perturbation (i.e. sympathetic activity), become synchronized and less sensitive to input variations. Given the demonstrated role of the CR coupling, this model can constitute a general tool which is easily embedded in other model-based emotion recognition systems. (paper)

  13. Exhibits Recognition System for Combining Online Services and Offline Services

    Science.gov (United States)

    Ma, He; Liu, Jianbo; Zhang, Yuan; Wu, Xiaoyu

    2017-10-01

    In order to achieve more convenient and accurate digital museum navigation, we have developed a real-time, online-to-offline museum exhibit recognition system using an image recognition method based on deep learning. In this paper, the client and server of the system are separated and connected through HTTP. Firstly, using the client app on an Android mobile phone, the user can take pictures and upload them to the server. Secondly, the features of the picture are extracted using a deep learning network on the server. With the help of these features, the pictures the user uploaded are classified with a well-trained SVM. Finally, the classification results are sent to the client and the detailed exhibit introduction corresponding to the classification result is shown in the client app. Experimental results demonstrate that the recognition accuracy is close to 100% and the computing time from image upload to the display of the exhibit information is less than 1 s. By means of the exhibit image recognition algorithm, our implemented exhibit recognition system can bring detailed online exhibition information to the user in the offline exhibition hall, so as to achieve better digital navigation.
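
    A sketch of the server-side step (deep features feeding a well-trained SVM) is given below. The paper does not name its network, so a torchvision ResNet-18 is assumed as the feature extractor, and the exhibit image paths and labels are hypothetical.

        # Sketch of the server-side step: deep features from a pretrained CNN feed an SVM.
        # ResNet-18 is a stand-in feature extractor; image paths and labels are hypothetical.
        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image
        from sklearn.svm import SVC

        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()        # drop the classification head, keep 512-d features
        backbone.eval()

        preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                                T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

        def deep_feature(path):
            with torch.no_grad():
                x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
                return backbone(x).squeeze(0).numpy()

        # hypothetical training set of (image path, exhibit label) pairs
        train = [("exhibits/vase_01.jpg", "vase"), ("exhibits/statue_01.jpg", "statue")]  # ...
        svm = SVC(kernel="linear").fit([deep_feature(p) for p, _ in train],
                                       [label for _, label in train])

        # classification result that would be returned to the mobile client
        print(svm.predict([deep_feature("upload_from_phone.jpg")]))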

  14. A Malaysian Vehicle License Plate Localization and Recognition System

    OpenAIRE

    Ganapathy Velappa; Dennis LUI Wen Lik

    2008-01-01

    Technological intelligence is a highly sought after commodity even in traffic-based systems. These intelligent systems do not only help in traffic monitoring but also in commuter safety, law enforcement and commercial applications. In this paper, a license plate localization and recognition system for vehicles in Malaysia is proposed. This system is developed based on digital images and can be easily applied to commercial car park systems for the use of documenting access of parking services,...

  15. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    Science.gov (United States)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  16. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Full Text Available Video recognition technology is applied to a landslide emergency remote monitoring system, which identifies the trajectories of the landslide. The geological disaster monitoring system combines the analysis of landslide monitoring data with video recognition technology. The landslide video monitoring system transmits video image information, time stamps, network signal strength and power supply status to the server over the 4G network. The data are comprehensively analysed through a remote man-machine interface, and the front-end video surveillance system is controlled either when a threshold is reached or manually. The recognition algorithm is embedded in the intelligent analysis module, where each video frame is detected, analysed, filtered and morphologically processed. An algorithm based on artificial intelligence and pattern recognition marks the target landslide in the video frame and determines whether the landslide behaviour is normal. The landslide video monitoring system realizes remote monitoring and control from the mobile side, and provides a quick and easy monitoring technology.

  17. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method using the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature utilizes landform information, spanning from the top-level task of superpixel extraction of landforms to the bottom-level task of feature vector expression. This characterization technique comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments on image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
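
    The first step of the characterization, SLIC superpixel segmentation, can be sketched with scikit-image as below; the image path and parameters are illustrative, and the adaptive filter bank, Lie-group quantification and saliency weighting steps are not reproduced.

        # Sketch of the first step: simple linear iterative clustering (SLIC) superpixels.
        # The aerial image path and the per-superpixel mean-colour descriptor are illustrative.
        import numpy as np
        from skimage import io, segmentation

        image = io.imread("aerial_scene.jpg")                     # hypothetical UAV image
        segments = segmentation.slic(image, n_segments=300, compactness=10, start_label=1)

        # one simple per-superpixel descriptor: the mean colour of each landform region
        features = np.array([image[segments == s].mean(axis=0) for s in np.unique(segments)])
        print(features.shape)                                     # (number of superpixels, 3)

        # visual check: superpixel boundaries drawn on the input image
        boundaries = segmentation.mark_boundaries(image, segments)
        io.imsave("superpixels.png", (boundaries * 255).astype(np.uint8))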

  18. A Novel Hybrid Biometric Electronic Voting System: Integrating Finger Print and Face Recognition

    Directory of Open Access Journals (Sweden)

    Shahram Najam

    2018-01-01

    Full Text Available A novel hybrid design based electronic voting system is proposed, implemented and analyzed. The proposed system uses two voter verification techniques to give better results in comparison to single identification based systems. Fingerprint and facial recognition based methods are used for voter identification. Cross verification of a voter during an election process provides better accuracy than a single-parameter identification method. The facial recognition system uses the Viola-Jones algorithm along with a rectangular Haar feature selection method for detection and extraction of features to develop a biometric template and for feature extraction during the voting process. Cascaded machine learning based classifiers are used for comparing the features for identity verification using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor). This is accomplished by comparing the Eigen-vectors of the extracted features with the biometric template pre-stored in the election regulatory body database. The results show that the proposed cascaded design performs better than systems using other classifiers or separate schemes, i.e., facial-only or fingerprint-only schemes. The proposed system will be highly useful for real-time applications because it achieves 91% facial recognition accuracy under nominal light.
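
    A compact sketch of the verification idea (comparing the Eigen-vectors of extracted features against a pre-stored template with K-NN) is given below, using plain PCA from scikit-learn as a stand-in for GPCA and synthetic feature vectors in place of real face data.

        # Compact sketch: project face features onto principal components (plain PCA as a
        # stand-in for GPCA) and compare against the enrolled templates with K-NN.
        # The enrolment data and the probe vector are synthetic placeholders.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        # hypothetical enrolment set: 10 registered voters, 5 face-feature vectors each (128-d)
        voter_ids = np.repeat(np.arange(10), 5)
        enrolled = rng.normal(size=(50, 128)) + voter_ids[:, None]   # separable toy clusters

        pca = PCA(n_components=20).fit(enrolled)            # eigenvector basis of the templates
        templates = pca.transform(enrolled)                 # projected biometric templates

        knn = KNeighborsClassifier(n_neighbors=3).fit(templates, voter_ids)

        probe = rng.normal(size=(1, 128)) + 4               # feature vector captured at the booth
        claimed_id = 4
        predicted_id = knn.predict(pca.transform(probe))[0]
        print("identity verified" if predicted_id == claimed_id else "verification failed")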

  19. SIFT Based Vein Recognition Models: Analysis and Improvement

    Directory of Open Access Journals (Sweden)

    Guoqing Wang

    2017-01-01

    Full Text Available Scale-Invariant Feature Transform (SIFT) is being investigated more and more as a way to realize a less-constrained hand vein recognition system. Contrast enhancement (CE), which compensates for a deficient dynamic range, is a must for a SIFT-based framework to improve performance. However, our experiments provide evidence of a negative influence of CE on SIFT matching. We show that the number of keypoints extracted by gradient-based detectors increases greatly with different CE methods, while, on the other hand, the matching result of the extracted invariant descriptors is negatively influenced in terms of Precision-Recall (PR) and Equal Error Rate (EER). Rigorous experiments with state-of-the-art CE methods and others adopted in published SIFT-based hand vein recognition systems demonstrate this influence. Furthermore, an improved SIFT model that imports the RootSIFT kernel and a Mirror Match Strategy into a unified framework is proposed to exploit the positive change in keypoints and make up for the negative influence brought by CE.
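
    The RootSIFT kernel mentioned above can be sketched in a few lines: each SIFT descriptor is L1-normalised and square-rooted, so that Euclidean matching approximates the Hellinger kernel. The vein image path is hypothetical and the Mirror Match Strategy is not reproduced.

        # Sketch of the RootSIFT kernel on OpenCV SIFT descriptors (hypothetical vein image).
        import cv2
        import numpy as np

        def rootsift(descriptors, eps=1e-7):
            descriptors = descriptors / (descriptors.sum(axis=1, keepdims=True) + eps)  # L1 norm
            return np.sqrt(descriptors).astype(np.float32)          # element-wise square root

        img = cv2.imread("hand_vein.png", cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, desc = sift.detectAndCompute(img, None)
        desc_root = rootsift(desc)

        # Euclidean matching on RootSIFT descriptors with Lowe's ratio test
        bf = cv2.BFMatcher(cv2.NORM_L2)
        matches = bf.knnMatch(desc_root, desc_root, k=2)             # self-match as a smoke test
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(len(keypoints), "keypoints,", len(good), "ratio-test matches")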

  20. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.

    Science.gov (United States)

    Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V

    2018-04-01

    Despite significant advances in computational algorithms and development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than the human tactile perception. Inspired by efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of development of efficient sparse codes for encoding sensory data to improve the energy efficiency. These results have a significance for a wide range of wearable, robotic
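
    The extreme learning machine principle behind the readout, a fixed random hidden layer with a least-squares-trained linear output, can be sketched in NumPy as below; the spike encoding and the hardware chip are not modelled, and the taxel-feature data are synthetic stand-ins.

        # Minimal NumPy sketch of the extreme learning machine principle: only the linear
        # readout is trained, the random hidden layer stays fixed. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_hidden, n_classes = 64, 300, 10           # e.g. 64 taxel features, 10 textures

        W_in = rng.normal(size=(n_in, n_hidden))          # random input weights, never trained
        b = rng.normal(size=n_hidden)

        def hidden(X):
            return np.tanh(X @ W_in + b)                  # random non-linear projection

        # synthetic training data: 500 samples with class-dependent means
        y_train = rng.integers(0, n_classes, size=500)
        X_train = rng.normal(size=(500, n_in)) + y_train[:, None] * 0.5
        T = np.eye(n_classes)[y_train]                    # one-hot targets

        H = hidden(X_train)
        W_out, *_ = np.linalg.lstsq(H, T, rcond=None)     # least-squares readout

        y_test = rng.integers(0, n_classes, size=100)
        X_test = rng.normal(size=(100, n_in)) + y_test[:, None] * 0.5
        pred = (hidden(X_test) @ W_out).argmax(axis=1)
        print("toy classification accuracy:", (pred == y_test).mean())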

  1. Combat Systems Department Employee Recognition System

    National Research Council Canada - National Science Library

    1996-01-01

    This handbook contains two types of information: guidelines and instructions. The guidelines provide a foundation of purpose, assumptions, principles, expectations and attributes the Employee Recognition System is designed to reflect...

  2. Neuro System Structure for Vehicle Recognition and Count in Floating Bridge Specific Conditions

    Directory of Open Access Journals (Sweden)

    Slobodan Beroš

    2012-10-01

    Full Text Available The paper presents research on a sophisticated vehicle recognition and count system based on the application of a neural network. The basic elements of the neural network and the adaptive logic network for object recognition are discussed. The ability to realize the adaptive logic network with simple digital circuits, which is crucial in real-time applications, is pointed out. The simulation, based on the use of reduced, high-level-noise pictures and the Atree 2.7 software, has shown excellent results. The considered and simulated adaptive neural network based system, with its good recognition and convergence, is a useful real-time solution for vehicle recognition and count in the severe conditions of a floating bridge.

  3. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    Directory of Open Access Journals (Sweden)

    Shouyi Yin

    2015-01-01

    Full Text Available Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained localization of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results on public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.

  4. Privacy protection schemes for fingerprint recognition systems

    Science.gov (United States)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

    The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
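
    The transformation-based (cancelable) template idea can be illustrated with a toy random-projection sketch: matching happens in the transformed domain, and a compromised template is re-issued simply by changing the application-specific key. This illustrates the general idea only, not any specific published scheme.

        # Toy sketch of a cancelable template via an application-specific random projection.
        # The fingerprint feature vectors, dimensions and threshold are placeholders.
        import numpy as np

        def make_transform(seed, in_dim=128, out_dim=96):
            """Application-specific projection; a different seed yields a re-issued template."""
            rng = np.random.default_rng(seed)
            return rng.normal(size=(in_dim, out_dim)) / np.sqrt(out_dim)

        def enroll(feature, seed):
            return feature @ make_transform(seed)            # only the transformed template is stored

        def verify(feature, template, seed, threshold=0.9):
            probe = feature @ make_transform(seed)            # matching in the transformed domain
            cos = probe @ template / (np.linalg.norm(probe) * np.linalg.norm(template))
            return cos >= threshold

        rng = np.random.default_rng(42)
        finger = rng.normal(size=128)                         # stand-in fingerprint feature vector
        stored = enroll(finger, seed=2024)

        noisy_probe = finger + rng.normal(scale=0.05, size=128)   # same finger, new capture
        print("genuine accepted:", verify(noisy_probe, stored, seed=2024))
        print("other finger rejected:", not verify(rng.normal(size=128), stored, seed=2024))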

  5. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors.

    Science.gov (United States)

    Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung

    2017-06-06

    Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods.

  6. Fingerprint recognition system by use of graph matching

    Science.gov (United States)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics to identify or verify persons by physiological characteristics, and has found wide applications in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal steps of processing in our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system has been implemented and tested on many fingerprint images, and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without using a prior coarse category classification. This system works well for both one-to-few and one-to-many problems.

  7. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Min Beom Lee

    2017-12-01

    Full Text Available In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user’s eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS.v2) database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  8. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    Science.gov (United States)

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  9. Optical character recognition based on nonredundant correlation measurements.

    Science.gov (United States)

    Braunecker, B; Hauck, R; Lohmann, A W

    1979-08-15

    The essence of character recognition is a comparison between the unknown character and a set of reference patterns. Usually, these reference patterns are all possible characters themselves, the whole alphabet in the case of letter characters. Obviously, N analog measurements are highly redundant, since only K = log2(N) binary decisions are enough to identify one out of N characters. Therefore, we devised K reference patterns accordingly. These patterns, called principal components, are found by digital image processing, but used in an optical analog computer. We will explain the concept of principal components, and we will describe experiments with several optical character recognition systems, based on this concept.

  10. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a

  11. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
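
    The patch-wise weighted LBP representation can be sketched with scikit-image as below: the face image is split into a grid of patches, a uniform LBP histogram is computed per patch, then weighted and concatenated. The grid size and image path are assumptions, and the Fisher-criterion weights and the multi-layer sparse-representation classifier are not reproduced (a uniform weight vector stands in).

        # Sketch of a patch-wise weighted LBP facial representation (placeholders throughout).
        import numpy as np
        from skimage import io, img_as_ubyte
        from skimage.feature import local_binary_pattern

        P, R = 8, 1                                         # 8 neighbours on a radius-1 circle
        n_bins = P + 2                                      # uniform LBP has P + 2 histogram bins

        def lbp_patch_feature(gray, grid=(7, 6), weights=None):
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            gh, gw = grid
            ph, pw = gray.shape[0] // gh, gray.shape[1] // gw
            if weights is None:
                weights = np.ones(gh * gw)                  # stand-in for Fisher-criterion weights
            feats = []
            for i in range(gh):
                for j in range(gw):
                    patch = lbp[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                    hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
                    feats.append(weights[i * gw + j] * hist)
            return np.concatenate(feats)

        face = img_as_ubyte(io.imread("face_48x56.png", as_gray=True))   # hypothetical aligned face
        print(lbp_patch_feature(face).shape)                # (7 * 6 * 10,) = (420,)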

  12. Cross domains Arabic named entity recognition system

    Science.gov (United States)

    Al-Ahmari, S. Saad; Abdullatif Al-Johar, B.

    2016-07-01

    Named Entity Recognition (NER) plays an important role in many Natural Language Processing (NLP) applications such as Information Extraction (IE), Question Answering (QA), Text Clustering, Text Summarization and Word Sense Disambiguation. This paper presents the development and implementation of a domain-independent system to recognize three types of Arabic named entities. The system works based on a set of domain-independent grammar rules along with an Arabic part-of-speech tagger, in addition to gazetteers and lists of trigger words. The experimental results show that the system performs as well as other systems, with better results in some cases on cross-domain corpora.

  13. Virtualized Network Function Orchestration System and Experimental Network Based QR Recognition for a 5G Mobile Access Network

    Directory of Open Access Journals (Sweden)

    Misun Ahn

    2017-12-01

    Full Text Available This paper proposes a virtualized network function orchestration system based on Network Function Virtualization (NFV), one of the main technologies in 5G mobile networks. Such a system should provide connectivity between network devices and be able to create and distribute network functions flexibly; this system focuses mainly on access networks. By experimenting with various scenarios in which user services are established and activated in a network, we examine whether rapid adoption of new services is possible and whether network resources can be managed efficiently. The proposed method is based on Bluetooth transfer technology and mesh networking to provide automatic connections between network machines, and on the Docker platform, a container virtualization technology, for setting up and managing key functions. Additionally, the system includes clustering and recovery measures for network functions based on the Docker platform. We briefly introduce a QR code recognition service as the user service used to examine the proposal, and based on this service we evaluate the functions of the proposal and present an analysis. Through the proposed approach, container relocation is implemented according to a network device's CPU usage, and we confirm successful service through functional evaluation on a real test bed. We measure QR code recognition speed as the amount of network equipment is gradually increased, improving the user service, and confirm that the recognition speed increases as the number of network devices assigned to the user service is increased.

  14. Posture recognition based on fuzzy logic for home monitoring of the elderly.

    Science.gov (United States)

    Brulin, Damien; Benezeth, Yannick; Courtial, Estelle

    2012-09-01

    We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With an accuracy of 74.29% of satisfactory posture recognition, this approach can detect emergency situations such as a fall within a health smart home.

  15. Deep Belief Networks Based Toponym Recognition for Chinese Text

    Directory of Open Access Journals (Sweden)

    Shu Wang

    2018-06-01

    Full Text Available In Geographical Information Systems, geo-coding is used for the task of mapping from implicitly geo-referenced data to explicitly geo-referenced coordinates. At present, an enormous amount of implicitly geo-referenced information is hidden in unstructured text, e.g., Wikipedia, social data and news. Toponym recognition is the foundation of mining this useful geo-referenced information by identifying words as toponyms in text. In this paper, we propose an adapted toponym recognition approach based on a deep belief network (DBN) by exploring two key issues: word representation and model interpretation. A Skip-Gram model is used in the word representation process to represent words with contextual information that is ignored by current word representation models. We then determine the core hyper-parameters of the DBN model by illustrating the relationship between the performance and the hyper-parameters, e.g., vector dimensionality, DBN structures and probability thresholds. The experiments evaluate the performance of the Skip-Gram model implemented by the Word2Vec open-source tool, determine stable hyper-parameters and compare our approach with a conditional random field (CRF) based approach. The experimental results show that the DBN model outperforms the CRF model with a smaller corpus. When the corpus size is large enough, their statistical metrics become comparable. However, their recognition results exhibit differences and complementarity for different kinds of toponyms. More importantly, combining their results can directly improve the performance of toponym recognition relative to their individual performances. It seems that the scale of the corpus has an obvious effect on the performance of toponym recognition. Generally, there is no adequate tagged corpus for specific toponym recognition tasks, especially in the era of Big Data. In conclusion, we believe that the DBN-based approach is a promising and powerful method to extract geo
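
    The word representation step (a Skip-Gram model trained with the Word2Vec tool) can be sketched as follows, using gensim's Word2Vec implementation for convenience; the tiny corpus, vector size and window are placeholders, and the DBN classifier itself is not reproduced.

        # Sketch of the Skip-Gram word representation step; the corpus and parameters are toy values.
        import numpy as np
        from gensim.models import Word2Vec

        # hypothetical tokenised corpus (e.g. sentences drawn from Wikipedia or news text)
        corpus = [
            ["beijing", "is", "the", "capital", "of", "china"],
            ["the", "yangtze", "river", "flows", "through", "wuhan"],
            ["wuhan", "is", "a", "city", "in", "hubei", "province"],
        ]

        model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)  # sg=1: Skip-Gram

        def token_feature(sentence, i, window=2):
            """Concatenate the vector of token i with the mean vector of its context window."""
            centre = model.wv[sentence[i]]
            context = [model.wv[w] for w in sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]]
            return np.concatenate([centre, np.mean(context, axis=0)])

        # such per-token features would then be scored as toponym / non-toponym by the classifier
        print(token_feature(corpus[0], 0).shape)            # (200,)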

  16. Type I-E CRISPR-Cas Systems Discriminate Target from Non-Target DNA through Base Pairing-Independent PAM Recognition

    Science.gov (United States)

    Datsenko, Kirill A.; Jackson, Ryan N.; Wiedenheft, Blake; Severinov, Konstantin; Brouns, Stan J. J.

    2013-01-01

    Discriminating self and non-self is a universal requirement of immune systems. Adaptive immune systems in prokaryotes are centered around repetitive loci called CRISPRs (clustered regularly interspaced short palindromic repeat), into which invader DNA fragments are incorporated. CRISPR transcripts are processed into small RNAs that guide CRISPR-associated (Cas) proteins to invading nucleic acids by complementary base pairing. However, to avoid autoimmunity it is essential that these RNA-guides exclusively target invading DNA and not complementary DNA sequences (i.e., self-sequences) located in the host's own CRISPR locus. Previous work on the Type III-A CRISPR system from Staphylococcus epidermidis has demonstrated that a portion of the CRISPR RNA-guide sequence is involved in self versus non-self discrimination. This self-avoidance mechanism relies on sensing base pairing between the RNA-guide and sequences flanking the target DNA. To determine if the RNA-guide participates in self versus non-self discrimination in the Type I-E system from Escherichia coli we altered base pairing potential between the RNA-guide and the flanks of DNA targets. Here we demonstrate that Type I-E systems discriminate self from non-self through a base pairing-independent mechanism that strictly relies on the recognition of four unchangeable PAM sequences. In addition, this work reveals that the first base pair between the guide RNA and the PAM nucleotide immediately flanking the target sequence can be disrupted without affecting the interference phenotype. Remarkably, this indicates that base pairing at this position is not involved in foreign DNA recognition. Results in this paper reveal that the Type I-E mechanism of avoiding self sequences and preventing autoimmunity is fundamentally different from that employed by Type III-A systems. We propose the exclusive targeting of PAM-flanked sequences to be termed a target versus non-target discrimination mechanism. PMID:24039596

  17. Pattern Recognition-Based Analysis of COPD in CT

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs

    The pattern recognition part is used to turn the texture measures, measured in a CT image of the lungs, into a quantitative measure of disease. This is done by applying a classifier that is trained on a training set of data examples with known lung tissue patterns. Different classification systems are considered, and we will in particular use the pattern recognition concepts of supervised learning, multiple instance learning, and dissimilarity representation-based classification. The proposed texture-based measures are applied to CT data from two different sources, one comprising low dose CT slices from subjects with manually annotated regions of emphysema and healthy tissue, and one comprising volumetric low dose CT images from subjects that are either healthy or suffer from COPD. Several experiments demonstrate that it is clearly beneficial to take the lung tissue texture into account when classifying or quantifying emphysema...

  18. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition, and also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, as well as the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new

  19. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Schuster Jeffrey

    2006-01-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  20. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Alex K. Jones

    2006-11-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  1. Features fusion based approach for handwritten Gujarati character recognition

    Directory of Open Access Journals (Sweden)

    Ankit Sharma

    2017-02-01

    Full Text Available Handwritten character recognition is a challenging area of research. Many research activities in the area of character recognition have already been carried out for Indian languages such as Hindi, Bangla, Kannada, Tamil and Telugu. A literature review on handwritten character recognition indicates that, in comparison with other Indian scripts, research activities on Gujarati handwritten character recognition are very limited. This paper aims to bring Gujarati character recognition to attention. Recognition of isolated handwritten Gujarati characters is proposed using three different kinds of features and their fusion. Chain code based, zone based and projection profile based features are utilized as individual features. One of the significant contributions of the proposed work is the generation of a large and representative dataset of 88,000 handwritten Gujarati characters. Experiments are carried out on this developed dataset. Artificial Neural Network (ANN), Support Vector Machine (SVM) and Naive Bayes (NB) classifier based methods are implemented for handwritten Gujarati character recognition. Experimental results show substantial enhancement over the state of the art and validate our proposals.
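
    One of the three feature types, zone-based features, can be sketched as below: the character image is resized, binarised and divided into a grid of zones, and the ink density of each zone forms the feature vector fed to an SVM. The image paths, labels and zone grid are assumptions, and the chain-code and projection-profile features and their fusion are not reproduced.

        # Toy sketch of zone-based (ink density) features feeding an SVM; paths and labels are hypothetical.
        import numpy as np
        from skimage import io, transform
        from sklearn.svm import SVC

        def zone_features(path, size=32, zones=4):
            """Resize a character image to size x size and return per-zone ink density."""
            img = transform.resize(io.imread(path, as_gray=True), (size, size))
            binary = img < 0.5                               # assume dark foreground pixels
            step = size // zones
            return np.array([binary[i:i + step, j:j + step].mean()
                             for i in range(0, size, step)
                             for j in range(0, size, step)])   # 4 x 4 = 16 densities

        # hypothetical labelled samples of handwritten Gujarati characters
        samples = [("gujarati/ka_001.png", "ka"), ("gujarati/kha_001.png", "kha")]  # ...

        X = np.array([zone_features(p) for p, _ in samples])
        y = [label for _, label in samples]
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([zone_features("gujarati/unknown.png")]))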

  2. Emotion recognition based on multiple order features using fractional Fourier transform

    Science.gov (United States)

    Ren, Bo; Liu, Deyin; Qi, Lin

    2017-07-01

    In order to deal with the insufficiency of recent algorithms based on the Two-Dimensional Fractional Fourier Transform (2D-FrFT), this paper proposes a multiple-order-features-based method for emotion recognition. Most existing methods utilize features of a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT make different contributions to feature extraction for emotion recognition, and a combination of these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the directions of the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for the classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.

  3. AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    K. Meena

    2013-11-01

    Full Text Available Automatic face recognition remains an interesting but challenging computer vision open problem. Poor illumination is considered one of the major issues, since illumination changes cause large variations in facial features. To resolve this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), Normalization chain and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition, but these features are severely affected by lighting changes. Hence, texture-based models, Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrP), are evaluated under different lighting conditions. In this paper, an illumination invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the YALE B and CMU-PIE databases containing more than 1500 images. The results demonstrate that MHF based normalization gives a significant improvement in recognition rate for face images with large illumination variations.

  4. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    Directory of Open Access Journals (Sweden)

    Henrik Karstoft

    2012-03-01

    Full Text Available Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified with this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of task, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of wildlife species causing conflict and, as such, may be used as an integrated part of a wildlife management system.
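
    A hedged sketch of the classification stage: standard MFCCs (via librosa) stand in for the Greenwood Function Cepstral Coefficients used in the paper, and an SVM is trained on per-clip mean feature vectors; the audio and labels below are placeholders, not the actual goose recordings.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(y, sr, n_cc=13):
    """Mean cepstral coefficients over a vocalization clip.
    MFCCs are used here as a stand-in for the paper's GFCCs."""
    cc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_cc)
    return cc.mean(axis=1)

# Placeholder data: 20 random 1-second "clips", labels 0=foraging, 1=flushing, 2=landing
sr = 22050
X = np.array([clip_features(np.random.randn(sr), sr) for _ in range(20)])
y = np.random.randint(0, 3, size=20)

clf = SVC(kernel='rbf').fit(X, y)
print(clf.predict(X[:3]))
```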

  5. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  6. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  7. Recognition-Based Pedagogy: Teacher Candidates' Experience of Deficit

    Science.gov (United States)

    Parkison, Paul T.; DaoJensen, Thuy

    2014-01-01

    This study seeks to introduce what we call "recognition-based pedagogy" as a conceptual frame through which teachers and instructors can collaboratively develop educative experiences with students. Recognition-based pedagogy connects the theories of critical pedagogy, identity politics, and the politics of recognition with the educative…

  8. Application of the new pattern recognition system in the new e-nose to detecting Chinese spirits

    International Nuclear Information System (INIS)

    Gu Yu; Li Qiang

    2014-01-01

    We present a new pattern recognition system based on a moving average and linear discriminant analysis (LDA), which can be used to process the original signal of the new polymer quartz piezoelectric crystal air-sensitive sensor system we designed, called the new e-nose. Using the new e-nose, we obtain the template data of Chinese spirits via the new pattern recognition system. To verify the effectiveness of the new pattern recognition system, we select three kinds of Chinese spirits to test; our results confirm that the new pattern recognition system can perfectly identify and distinguish between the Chinese spirits.
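
    A minimal sketch of the signal-processing idea under stated assumptions (each sample is a raw frequency-shift time series per sensor channel, which is not necessarily the authors' exact data format): a moving average smooths each channel and LDA both projects and classifies the smoothed features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def moving_average(signal, window=5):
    """Simple moving average used to smooth a raw sensor channel."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='valid')

# Placeholder data: 30 samples x 8 sensor channels x 200 time points
raw = np.random.randn(30, 8, 200)
labels = np.random.randint(0, 3, size=30)       # three kinds of spirits

# One feature per channel: mean of the smoothed signal
X = np.array([[moving_average(ch).mean() for ch in sample] for sample in raw])

lda = LinearDiscriminantAnalysis().fit(X, labels)
print(lda.predict(X[:5]))
```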

  9. Extending the Capture Volume of an Iris Recognition System Using Wavefront Coding and Super-Resolution.

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen

    2016-12-01

    Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.

  10. Utilization-based object recognition in confined spaces

    Science.gov (United States)

    Shirkhodaie, Amir; Telagamsetti, Durga; Chan, Alex L.

    2017-05-01

    Recognizing substantially occluded objects in confined spaces is a very challenging problem for ground-based persistent surveillance systems. In this paper, we discuss the ontology inference of occluded object recognition in the context of in-vehicle group activities (IVGA) and describe an approach that we refer to as utilization-based object recognition method. We examine the performance of three types of classifiers tailored for the recognition of objects with partial visibility, namely, (1) Hausdorff Distance classifier, (2) Hamming Network classifier, and (3) Recurrent Neural Network classifier. In order to train these classifiers, we have generated multiple imagery datasets containing a mixture of common objects appearing inside a vehicle with full or partial visibility and occultation. To generate dynamic interactions between multiple people, we model the IVGA scenarios using a virtual simulation environment, in which a number of simulated actors perform a variety of IVGA tasks independently or jointly. This virtual simulation engine produces the much needed imagery datasets for the verification and validation of the efficiency and effectiveness of the selected object recognizers. Finally, we improve the performance of these object recognizers by incorporating human gestural information that differentiates various object utilization or handling methods through the analyses of dynamic human-object interactions (HOI), human-human interactions (HHI), and human-vehicle interactions (HVI) in the context of IVGA.

  11. Finger vein recognition based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Meng Gesi

    2017-01-01

    Full Text Available Biometric authentication technology has been widely used in this information age. As one of the most important authentication technologies, finger vein recognition attracts our attention because of its high security, reliable accuracy and excellent performance. However, the current finger vein recognition system is difficult to apply widely because of its complicated image pre-processing and unrepresentative feature vectors. To solve this problem, a finger vein recognition method based on a convolutional neural network (CNN) is proposed in this paper. The image samples are directly input into the CNN model to extract their feature vectors, so that authentication can be performed by comparing the Euclidean distance between these vectors. Finally, the deep learning framework Caffe is adopted to verify this method. The results show great improvements in both speed and accuracy compared to previous research, and the model shows good robustness to illumination and rotation.
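
    A toy sketch of the verification idea, not the authors' Caffe model: a small convolutional network maps a finger-vein image to an embedding, and two images are declared a match when the Euclidean distance between their embeddings falls below a threshold. The architecture, image size and threshold below are assumptions.

```python
import torch
import torch.nn as nn

class VeinEmbedder(nn.Module):
    """Tiny CNN that maps a 1x64x128 finger-vein image to a 128-d embedding."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 16 * 32, 128)

    def forward(self, x):
        x = self.features(x)
        return self.fc(x.flatten(1))

def is_same_finger(emb_a, emb_b, threshold=1.0):
    """Authenticate by thresholding the Euclidean distance between embeddings."""
    return torch.dist(emb_a, emb_b).item() < threshold

model = VeinEmbedder().eval()
img_a = torch.randn(1, 1, 64, 128)   # placeholder vein images
img_b = torch.randn(1, 1, 64, 128)
with torch.no_grad():
    print(is_same_finger(model(img_a), model(img_b)))
```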

  12. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applying this algorithm to natural postures, a recognition rate twice as high as that of our conventional system was scored. The system has high potential for future use in a variety of purposes, such as searching for criminal suspects using street and airport video cameras, registering babies at hospitals, or handling an immense number of images in a database.

  13. Finger Vein Recognition Based on Personalized Weight Maps

    Science.gov (United States)

    Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu

    2013-01-01

    Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods were thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. Firstly we present the concept of PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition. PMID:24025556
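
    A small sketch of the core matching idea under assumptions about the code format: each finger-vein sample is a binary feature code, the personalized weight of each bit is estimated from its stability across a user's enrollment samples, and matching uses a weighted disagreement count instead of a plain Hamming distance.

```python
import numpy as np

def personalized_weights(enroll_codes):
    """Estimate per-bit weights from bit stability across a user's
    enrollment codes: bits that agree more often get larger weights."""
    p = enroll_codes.mean(axis=0)              # fraction of 1s per bit
    stability = np.abs(p - 0.5) * 2            # 0 = unstable, 1 = perfectly stable
    return stability / stability.sum()

def weighted_distance(code, template, weights):
    """Weighted Hamming-style distance between a probe code and a template."""
    return np.sum(weights * (code != template))

# Placeholder: 5 enrollment codes of 256 bits for one user
enroll = np.random.randint(0, 2, size=(5, 256))
w = personalized_weights(enroll)
template = (enroll.mean(axis=0) > 0.5).astype(int)

probe = np.random.randint(0, 2, size=256)
print(weighted_distance(probe, template, w))   # smaller = better match
```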

  14. Finger Vein Recognition Based on Personalized Weight Maps

    Directory of Open Access Journals (Sweden)

    Lu Yang

    2013-09-01

    Full Text Available Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods were thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. Firstly we present the concept of PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition.

  15. A Robust and Device-Free System for the Recognition and Classification of Elderly Activities.

    Science.gov (United States)

    Li, Fangmin; Al-Qaness, Mohammed Abdulaziz Aide; Zhang, Yong; Zhao, Bihai; Luan, Xidao

    2016-12-01

    Human activity recognition, tracking and classification is an essential trend in assisted living systems that can help support elderly people with their daily activities. Traditional activity recognition approaches depend on vision-based or sensor-based techniques. Nowadays, a novel promising technique has attracted more attention, namely device-free human activity recognition, which requires neither the target to wear or carry a device nor cameras to be installed in the perceived area. The device-free technique for activity recognition uses only the signals of common wireless local area network (WLAN) devices available everywhere. In this paper, we present a novel elderly activity recognition system that leverages the fluctuation of wireless signals caused by human motion. We present an efficient method to select the correct data from the Channel State Information (CSI) streams that were neglected in previous approaches. We apply a Principal Component Analysis method that exposes the useful information in raw CSI. Thereafter, a Forest Decision (FD) classifier is adopted to classify the proposed activities and achieves a high accuracy rate. Extensive experiments have been conducted in an indoor environment to test the feasibility of the proposed system with a total of five volunteer users. The evaluation shows that the proposed system is applicable and robust to electromagnetic noise.
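
    A hedged sketch of the classification pipeline with placeholder CSI data: PCA exposes the informative components of the raw CSI streams, and a random-forest classifier (standing in for the paper's Forest Decision classifier) labels the activities.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Placeholder: 200 activity windows, each flattened from 30 subcarriers x 100 packets
X = np.random.randn(200, 30 * 100)
y = np.random.randint(0, 5, size=200)          # five elderly activities

clf = make_pipeline(PCA(n_components=10), RandomForestClassifier(n_estimators=100))
clf.fit(X, y)
print(clf.predict(X[:5]))
```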

  16. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance

  17. Deep Learning based Super-Resolution for Improved Action Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Guerrero, Sergio Escalera; Rasti, Pejman

    2015-01-01

    with results of a state-of-the-art deep learning-based super-resolution algorithm, through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate...

  18. Body posture recognition and turning recording system for the care of bed bound patients.

    Science.gov (United States)

    Hsiao, Rong-Shue; Mi, Zhenqiang; Yang, Bo-Ru; Kau, Lih-Jen; Bitew, Mekuanint Agegnehu; Li, Tzu-Yu

    2015-01-01

    This paper proposes a body posture recognition and turning recording system for assisting the care of bed-bound patients in nursing homes. The system continuously detects the patient's body posture and records the length of time spent in each body posture. If the patient remains in the same body posture long enough to develop pressure ulcers, the system notifies caregivers to change the patient's body posture. The objective of the recording is to provide a log of body turning for querying by patients' family members. In order to accurately detect the patient's body posture, we developed a novel pressure sensing pad which contains force sensing resistor sensors. Based on the proposed pressure sensing pad, we developed a bed posture recognition module which includes a bed posture recognition algorithm based on fuzzy theory. The body posture recognition algorithm can detect whether the patient's bed posture is right lateral decubitus, left lateral decubitus, or supine. The detected information on the patient's body posture can then be transmitted to the server of the healthcare center by the communication module to perform the recording and notification functions. Experimental results showed that the average posture recognition accuracy of our proposed module is 92%.

  19. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, learning, and classifiers; Image processing and computer vision; Medical applications; Miscellaneous applications; Pattern recognition and image processing in robotics; Speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers are researchers as well as students of computer science, artificial intelligence or robotics.

  20. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
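
    A minimal sketch of the best-performing configuration reported above (desaturation only, followed by nearest-neighbor classification on the raw pixels); the image data, sizes and labels are placeholders.

```python
import numpy as np
from skimage.color import rgb2gray
from sklearn.neighbors import KNeighborsClassifier

def preprocess(rgb_img):
    """Desaturation only: convert the RGB hand image to grayscale and flatten."""
    return rgb2gray(rgb_img).ravel()

# Placeholder dataset: 60 RGB images of size 64x64, six gesture classes
images = np.random.rand(60, 64, 64, 3)
labels = np.repeat(np.arange(6), 10)

X = np.array([preprocess(im) for im in images])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(knn.predict(X[:3]))
```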

  1. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, for hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles toward local peaks of the likelihood. The proposed hand tracking method, namely the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the challenging hand-signed digits datasets and an American Sign Language dataset, which corroborate the performance of the novel techniques.

  2. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    Science.gov (United States)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty in current algorithms, face detection is used for pre-processing to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve the detection accuracy. The proposed system can be applied in interaction equipment without special training for users, such as household interactive televisions.
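
    A compact sketch of the HMM classification stage using the hmmlearn package (an assumption; the paper does not name an implementation): one Gaussian HMM is trained per basic gesture unit on sequences of direction-feature vectors, and an unknown sequence is assigned to the model with the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_per_class, n_states=3):
    """Train one HMM per gesture class from lists of (T, d) feature sequences."""
    models = {}
    for label, seqs in sequences_per_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Pick the gesture model with the highest log-likelihood for the sequence."""
    return max(models, key=lambda lbl: models[lbl].score(seq))

# Placeholder data: 8-direction feature vectors, 5 training sequences per gesture
rng = np.random.default_rng(0)
train = {g: [rng.random((20, 8)) for _ in range(5)] for g in ['swipe', 'circle']}
models = train_gesture_models(train)
print(classify(models, rng.random((20, 8))))
```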

  3. Artificial Neural Network Based Optical Character Recognition

    OpenAIRE

    Vivek Shrivastava; Navdeep Sharma

    2012-01-01

    Optical Character Recognition deals with the recognition and classification of characters from an image. For the recognition to be accurate, certain topological and geometrical properties are calculated, based on which a character is classified and recognized. Also, human perception identifies characters by their overall shape and features such as strokes, curves, protrusions, enclosures etc. These properties, also called features, are extracted from the image by means of spatial pixel-...

  4. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

    Full Text Available Music is the combination of melody, linguistic information and the vocalist's emotion. Since music is a work of art, analyzing emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions conveyed in music, but the results are not satisfactory because emotion is very complex. In this paper, the evaluation of audio features extracted from music files is presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done using the Music Information Retrieval (MIR) toolbox in this paper. A database of 100 music clips is used to classify the emotions perceived in the clips. Music may contain many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, peaceful, etc. In this paper, the audio features related to the emotions of the vocalists are extracted for use in an emotion recognition system based on music.

  5. Secure method for biometric-based recognition with integrated cryptographic functions.

    Science.gov (United States)

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the certification rate in biometric systems need not reach 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  6. Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions

    Directory of Open Access Journals (Sweden)

    Shin-Yan Chiou

    2013-01-01

    Full Text Available Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the certification rate in biometric systems need not reach 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  7. New neural-networks-based 3D object recognition system

    Science.gov (United States)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. In recent years, Ullman and Basri (1991) proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that the corresponding points must be known to interpolate the views. On the other hand, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. By using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, this system reaches its stable point with a single presentation of the view to the system. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes from 50 different input views), which can be used to select an optimum database of training views. The system is also very robust to noise and deformed views.

  8. Biometric Features in Person Recognition Systems

    Directory of Open Access Journals (Sweden)

    Edgaras Ivanovas

    2011-03-01

    Full Text Available Lately, a lot of research effort has been devoted to the recognition of human beings using their biometric characteristics. Biometric recognition systems are used in various applications, e.g., identification at state border crossings or firearms that only enrolled persons are allowed to use. In this paper, biometric characteristics and their properties are reviewed. Development of a high-accuracy system requires distinctive and permanent characteristics, whereas development of a user-friendly system requires collectable and acceptable characteristics. It is shown that the properties of biometric characteristics do not influence research effort significantly. Properties of biometric characteristic features and their influence are discussed. Article in Lithuanian

  9. Sensor-Based Activity Recognition with Dynamically Added Context

    Directory of Open Access Journals (Sweden)

    Jiahui Wen

    2015-08-01

    Full Text Available An activity recognition system essentially processes raw sensor data and maps it into latent activity classes. Most previous systems are built with supervised learning techniques and pre-defined data sources, and result in static models. However, in realistic and dynamic environments, original data sources may fail and new data sources may become available; a robust activity recognition system should therefore be able to evolve automatically with dynamic sensor availability in dynamic environments. In this paper, we propose methods that automatically incorporate dynamically available data sources to adapt and refine the recognition system at run-time. The system is built upon ensemble classifiers which can automatically choose the features with the most discriminative power. Extensive experimental results with publicly available datasets demonstrate the effectiveness of our methods.

  10. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition, and the results of both classifiers are then combined using a third classifier which gives the resulting emotion. A multimodal system gives more accurate results than a single-modality or bimodal system.

  11. Material recognition based on thermal cues: Mechanisms and applications.

    Science.gov (United States)

    Ho, Hsin-Ni

    2018-01-01

    Some materials feel colder to the touch than others, and we can use this difference in perceived coldness for material recognition. This review focuses on the mechanisms underlying material recognition based on thermal cues. It provides an overview of the physical, perceptual, and cognitive processes involved in material recognition. It also describes engineering domains in which material recognition based on thermal cues has been applied, including haptic interfaces that seek to reproduce the sensations associated with contact in virtual environments and tactile sensors that aim at automatic material recognition. The review concludes by considering the contributions of this line of research to both science and engineering.

  12. Possibility of object recognition using Altera's model based design approach

    International Nuclear Information System (INIS)

    Tickle, A J; Harvey, P K; Smith, J S; Wu, F

    2009-01-01

    Object recognition is an image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas: one of these is decision-theoretic and deals with patterns described by quantitative descriptors such as length, area, shape and texture. With the Graphical User Interface Circuitry (GUIC) methodology employed here being relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image supplies the preset data for identification; tests are conducted for both binary and greyscale images, and the additional mathematical morphology used to highlight the area within the target image where the object(s) are located is also presented. This provides proof that basic recognition methods are valid and would allow progression to developing decision-theoretic and learning-based approaches using GUICs for use in multidisciplinary tasks.

  13. New pattern recognition system in the e-nose for Chinese spirit identification

    International Nuclear Information System (INIS)

    Zeng Hui; Li Qiang; Gu Yu

    2016-01-01

    This paper presents a new pattern recognition system for Chinese spirit identification using the polymer quartz piezoelectric crystal sensor based e-nose. The sensors are designed based on the quartz crystal microbalance (QCM) principle, and they capture different vibration frequency signal values for Chinese spirit identification. For each sensor in an 8-channel sensor array, seven characteristic values of the original vibration frequency signal are first extracted, i.e., the average value (A), root-mean-square value (RMS), shape factor (Sf), crest factor (Cf), impulse factor (If), clearance factor (CLf) and kurtosis factor (Kv). Then the dimension of the characteristic values is reduced by the principal component analysis (PCA) method. Finally the back propagation (BP) neural network algorithm is used to recognize the Chinese spirits. The experimental results show that the recognition rate for six kinds of Chinese spirits is 93.33% and that our proposed new pattern recognition system can identify Chinese spirits effectively.
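
    A sketch of the feature pipeline with placeholder signals: the seven statistical values are computed per channel, PCA reduces the dimensionality, and scikit-learn's MLPClassifier stands in for the back-propagation neural network; the channel counts and data below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def channel_stats(x):
    """Seven statistics of one sensor channel: A, RMS, Sf, Cf, If, CLf, Kv."""
    a = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    sf = rms / a                                         # shape factor
    cf = peak / rms                                      # crest factor
    i_f = peak / a                                       # impulse factor
    cl_f = peak / np.mean(np.sqrt(np.abs(x))) ** 2       # clearance factor
    kv = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2   # kurtosis factor
    return [a, rms, sf, cf, i_f, cl_f, kv]

# Placeholder: 60 samples, 8 QCM channels, 500 frequency readings each
raw = np.random.randn(60, 8, 500) + 5.0
X = np.array([np.concatenate([channel_stats(ch) for ch in s]) for s in raw])
y = np.random.randint(0, 6, size=60)           # six kinds of Chinese spirits

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
model.fit(X, y)
print(model.score(X, y))
```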

  14. Military personnel recognition system using texture, colour, and SURF features

    Science.gov (United States)

    Irhebhude, Martins E.; Edirisinghe, Eran A.

    2014-06-01

    This paper presents an automatic, machine-vision-based military personnel identification and classification system. Classification is done using a Support Vector Machine (SVM) on sets of Army, Air Force and Navy camouflage uniform personnel datasets. In the proposed system, the arm of service of personnel is recognised by the camouflage of a person's uniform, the type of cap and the type of badge/logo. The detailed analyses include: camouflage cap and plain cap differentiation using gray level co-occurrence matrix (GLCM) texture features; classification of Army, Air Force and Navy camouflaged uniforms using GLCM texture and colour histogram bin features; and plain cap badge classification into Army, Air Force and Navy using Speeded-Up Robust Features (SURF). The proposed method recognised the camouflage personnel arm of service on sets of data retrieved from Google Images and selected military websites. Correlation-based Feature Selection (CFS) was used to improve recognition and reduce dimensionality, thereby speeding up the classification process. With this method, success rates recorded during the analysis include 93.8% for the camouflage appearance category, and 100%, 90% and 100% for the plain cap and camouflage cap categories for Army, Air Force and Navy, respectively. Accurate recognition was recorded using SURF for the plain cap badge category. Substantial analysis has been carried out and the results prove that the proposed method can correctly classify military personnel into the various arms of service. We show that the proposed method can be integrated into a face recognition system, which would recognise personnel in addition to determining the arm of service to which they belong. Such a system can be used to enhance the security of a military base or facility.
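
    A small sketch of the camouflage-appearance stage under stated assumptions: GLCM texture statistics plus coarse per-channel color histograms form the feature vector and an SVM separates the Army/Air Force/Navy uniform classes; the images and labels are placeholders, and the SURF-based badge stage is not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def uniform_features(rgb):
    """GLCM texture statistics + 8-bin-per-channel color histogram."""
    gray = (rgb.mean(axis=2) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    tex = [graycoprops(glcm, p)[0, 0]
           for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    hist = [np.histogram(rgb[..., c], bins=8, range=(0, 1))[0] for c in range(3)]
    return np.concatenate([tex, np.concatenate(hist) / rgb[..., 0].size])

# Placeholder uniform patches; real data would be labelled camouflage crops
X = np.array([uniform_features(np.random.rand(64, 64, 3)) for _ in range(30)])
y = np.random.randint(0, 3, size=30)             # 0=Army, 1=Air Force, 2=Navy
print(SVC().fit(X, y).predict(X[:3]))
```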

  15. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.

  16. REAL-TIME FACE RECOGNITION BASED ON OPTICAL FLOW AND HISTOGRAM EQUALIZATION

    Directory of Open Access Journals (Sweden)

    D. Sathish Kumar

    2013-05-01

    Full Text Available Face recognition is one of the intensive areas of research in computer vision and pattern recognition, but much of it is focused on the recognition of faces under varying facial expressions and pose variations. The constrained optical flow algorithm discussed in this paper recognizes facial images involving various expressions based on motion vector computation. In this paper, an optical flow computation algorithm which computes the motion between frames of varying facial gestures and integrates it with a synthesized image in a probabilistic environment has been proposed. A Histogram Equalization technique has also been used to overcome the effect of illumination while capturing the input data using camera devices; it also enhances the contrast of the image for better processing. The experimental results confirm that the proposed face recognition system is more robust and recognizes facial images under varying expressions and pose variations more accurately.
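
    A brief sketch of the two ingredients named above using OpenCV: histogram equalization to reduce illumination effects and dense (Farnebäck) optical flow between consecutive face frames to obtain motion vectors; the paper's constrained flow formulation and probabilistic integration are not reproduced here.

```python
import cv2
import numpy as np

def equalized_gray(frame):
    """Convert to grayscale and equalize the histogram to reduce illumination effects."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def face_motion_vectors(prev_frame, next_frame):
    """Dense optical flow between two face frames (H x W x 2 motion field)."""
    prev_g, next_g = equalized_gray(prev_frame), equalized_gray(next_frame)
    return cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

# Placeholder frames standing in for two consecutive face images
f0 = np.random.randint(0, 256, (120, 120, 3), dtype=np.uint8)
f1 = np.random.randint(0, 256, (120, 120, 3), dtype=np.uint8)
flow = face_motion_vectors(f0, f1)
print(flow.shape)   # (120, 120, 2)
```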

  17. Model-based vision system for automatic recognition of structures in dental radiographs

    Science.gov (United States)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between the cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  18. Automated alignment system for optical wireless communication systems using image recognition.

    Science.gov (United States)

    Brandl, Paul; Weiss, Alexander; Zimmermann, Horst

    2014-07-01

    In this Letter, we describe the realization of a tracked line-of-sight optical wireless communication system for indoor data distribution. We built a laser-based transmitter with adaptive focus and ray steering by a microelectromechanical systems mirror. To execute the alignment procedure, we used a CMOS image sensor at the transmitter side and developed an algorithm for image recognition to localize the receiver's position. The receiver is based on a self-developed optoelectronic integrated chip with low requirements on the receiver optics to make the system economically attractive. With this system, we were able to set up the communication link automatically without any back channel and to perform error-free (bit error rate <10⁻⁹) data transmission over a distance of 3.5 m with a data rate of 3 Gbit/s.

  19. Fast Pedestrian Recognition Based on Multisensor Fusion

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2012-01-01

    Full Text Available A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. Firstly, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and then their corresponding candidate regions in the image are located by camera calibration and the perspective mapping model. To avoid the time-consuming training and recognition caused by a large number of feature vector dimensions, a region-of-interest-based integral histogram of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained with a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach with several video sequences from realistic urban road scenarios. Reliable and timely performance is shown with our multisensor fusion method.

  20. Automated pattern recognition system for noise analysis

    International Nuclear Information System (INIS)

    Sides, W.H. Jr.; Piety, K.R.

    1980-01-01

    A pattern recognition system was developed at ORNL for on-line monitoring of noise signals from sensors in a nuclear power plant. The system continuously measures the power spectral density (PSD) values of the signals and the statistical characteristics of the PSDs in unattended operation. Through statistical comparison of current with past PSDs (pattern recognition), the system detects changes in the noise signals. Because the noise signals contain information about the current operational condition of the plant, a change in these signals could indicate a change, either normal or abnormal, in the operational condition.
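
    An illustrative sketch of the monitoring idea with synthetic signals: Welch's method estimates the PSD of the current noise record, which is compared statistically against a baseline of past PSDs to flag a change; the simple n-sigma rule below is an assumption, not the actual ORNL criterion.

```python
import numpy as np
from scipy.signal import welch

def psd(signal, fs=1000.0):
    """Power spectral density of one noise record using Welch's method."""
    freqs, p = welch(signal, fs=fs, nperseg=256)
    return freqs, p

def psd_change_detected(baseline_psds, current_psd, n_sigma=3.0):
    """Flag a change if the current PSD departs from the baseline mean
    by more than n_sigma standard deviations in any frequency bin."""
    mean = baseline_psds.mean(axis=0)
    std = baseline_psds.std(axis=0) + 1e-12
    return np.any(np.abs(current_psd - mean) > n_sigma * std)

fs = 1000.0
baseline = np.array([psd(np.random.randn(4096), fs)[1] for _ in range(20)])
t = np.arange(4096) / fs
_, current = psd(np.random.randn(4096) + 0.5 * np.sin(2 * np.pi * 50 * t), fs)
print(psd_change_detected(baseline, current))
```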

  1. Development of Vision Based Multiview Gait Recognition System with MMUGait Database

    Directory of Open Access Journals (Sweden)

    Hu Ng

    2014-01-01

    Full Text Available This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, which were captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, which include self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove the artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step-size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers. The extracted features are normalized with linear scaling, which is followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on the SOTON Small DB in most cases.

  2. Towards NIRS-based hand movement recognition.

    Science.gov (United States)

    Paleari, Marco; Luciani, Riccardo; Ariano, Paolo

    2017-07-01

    This work reports preliminary results on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data-gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and is at the base of many gaming, social, and medical applications. Although, in recent years, the use of muscle information extracted by sEMG has spread from medical applications into the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to obtain another point of view on muscle phenomena and showed that, for a specific selection of movements, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose three different multimodal movement recognition approaches and compare their performances.

  3. Dynamic Gesture Recognition with a Terahertz Radar Based on Range Profile Sequences and Doppler Signatures.

    Science.gov (United States)

    Zhou, Zhi; Cao, Zongjie; Pi, Yiming

    2017-12-21

    The frequency of terahertz radar ranges from 0.1 THz to 10 THz, which is higher than that of microwaves. Multi-modal signals, including high-resolution range profile (HRRP) and Doppler signatures, can be acquired by the terahertz radar system. These two kinds of information are commonly used in automatic target recognition; however, dynamic gesture recognition is rarely discussed in the terahertz regime. In this paper, a dynamic gesture recognition system using a terahertz radar is proposed, based on multi-modal signals. The HRRP sequences and Doppler signatures are first obtained from the radar echoes. Considering the electromagnetic scattering characteristics, a feature extraction model is designed using location parameter estimation of scattering centers. Dynamic Time Warping (DTW) extended to multi-modal signals is used to accomplish the classification. Ten types of gesture signals, collected from a terahertz radar, are applied to validate the analysis and the recognition system. The results of the experiment indicate that the recognition rate reaches more than 91%. This research verifies the potential applications of dynamic gesture recognition using a terahertz radar.
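
    A compact sketch of DTW-based matching extended to multi-modal sequences, assuming each gesture is represented by a time series of feature vectors that concatenate range-profile and Doppler descriptors; the paper's scattering-center feature model is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two (T, d) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_template(templates, query):
    """1-NN classification of a gesture sequence by DTW distance."""
    return min(templates, key=lambda lbl: dtw_distance(templates[lbl], query))

rng = np.random.default_rng(1)
# Placeholder multi-modal sequences: 30 frames x (range + Doppler) features
templates = {'push': rng.random((30, 6)), 'swipe': rng.random((30, 6))}
print(nearest_template(templates, rng.random((28, 6))))
```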

  4. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  5. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar.

    Science.gov (United States)

    Shin, Young Hoon; Seo, Jiwon

    2016-10-29

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker's vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing.

  6. Cross domains Arabic named entity recognition system

    KAUST Repository

    Al-Ahmari, S. Saad

    2016-07-11

    Named Entity Recognition (NER) plays an important role in many Natural Language Processing (NLP) applications, such as Information Extraction (IE), Question Answering (QA), Text Clustering, Text Summarization and Word Sense Disambiguation. This paper presents the development and implementation of a domain-independent system to recognize three types of Arabic named entities. The system works on the basis of a set of domain-independent grammar rules along with an Arabic part-of-speech tagger, in addition to gazetteers and lists of trigger words. The experimental results show that the system performed as well as other systems, with better results in some cases on cross-domain corpora. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  7. Cross domains Arabic named entity recognition system

    KAUST Repository

    Al-Ahmari, S. Saad; Abdullatif Al-Johar, B.

    2016-01-01

    Named Entity Recognition (NER) plays an important role in many Natural Language Processing (NLP) applications, such as Information Extraction (IE), Question Answering (QA), Text Clustering, Text Summarization and Word Sense Disambiguation. This paper presents the development and implementation of a domain-independent system to recognize three types of Arabic named entities. The system works on the basis of a set of domain-independent grammar rules along with an Arabic part-of-speech tagger, in addition to gazetteers and lists of trigger words. The experimental results show that the system performed as well as other systems, with better results in some cases on cross-domain corpora. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  8. A real time mobile-based face recognition with fisherface methods

    Science.gov (United States)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing this face recognition technology, the process of learning other students' identities at a university becomes simpler. With this technology, students will not need to browse the student directory on the university's server and look for a person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases: a pre-processing phase and a recognition phase. In the pre-processing phase, the system converts the input image into the best image for the recognition phase, reducing noise and enhancing the signal in the image. In the recognition phase, we use the Fisherface method, which is chosen because it copes well with the system's limited data. Finally, experiments show that the accuracy of face recognition using Fisherface is 90%.
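
    A minimal Fisherface-style sketch with scikit-learn (placeholder data; the mobile pipeline and pre-processing of the paper are not reproduced): PCA reduces the flattened face images to N - c dimensions before LDA, following the classic Fisherface recipe, and a Euclidean nearest neighbor in the LDA space identifies the student.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder gallery: 10 students, 5 face images each, 32x32 grayscale
n_classes, per_class, dim = 10, 5, 32 * 32
X = np.random.rand(n_classes * per_class, dim)
y = np.repeat(np.arange(n_classes), per_class)

# Fisherface recipe: PCA down to (N - c) dims, then LDA, then nearest neighbor
fisherface = make_pipeline(
    PCA(n_components=len(X) - n_classes),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1, metric='euclidean'),
)
fisherface.fit(X, y)
probe = np.random.rand(1, dim)          # a face captured by the mobile camera
print(fisherface.predict(probe))
```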

  9. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages: one performs detection and the other recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. The proposed system achieved an average accuracy of 99.90%, with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a false positive rate (FPR) of 0.001, with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471
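
    The detection stage above relies on a hybrid color segmentation algorithm. A generic way to obtain red-sign candidate regions is HSV thresholding followed by contour extraction, sketched below; the hue/saturation thresholds and the area filter are assumptions, not the paper's calibrated values.

```python
# Generic HSV colour segmentation for red road-sign candidates.
# The thresholds below are illustrative assumptions.
import cv2
import numpy as np

def red_sign_mask(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Clean up small speckles before contour extraction.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

frame = cv2.imread("frame.jpg")          # hypothetical input frame
if frame is not None:
    contours, _ = cv2.findContours(red_sign_mask(frame),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 200]
    print(candidates)
```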

  10. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages: one performs detection and the other recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. The proposed system achieved an average accuracy of 99.90%, with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a false positive rate (FPR) of 0.001, with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.

  11. Towards evidence-based, quality-controlled health promotion: the Dutch recognition system for health promotion interventions.

    NARCIS (Netherlands)

    Brug, J.; Dale, D. van; Lanting, L.; Kremers, S.; Veenhof, C.; Leurs, M.; Yperen, T. van; Kok, G.

    2010-01-01

    Registration or recognition systems for best-practice health promotion interventions may contribute to better quality assurance and control in health promotion practice. In the Netherlands, such a system has been developed and is being implemented aiming to provide policy makers and professionals

  12. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.
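
    A generic way to write the sparse-plus-low-rank weight decomposition described above is the objective below; the notation (W, S, L, lambda) is ours, and the paper's exact formulation may differ.

```latex
% Generic sparse-plus-low-rank decomposition of the task-feature weight
% matrix W (our notation; the paper's exact objective may differ).
% S captures subject-specific (sparse) feature weights; the nuclear norm
% on L promotes a shared low-rank feature subspace across subjects.
\min_{S,\,L}\;\; \sum_{t=1}^{T} \ell_t\bigl((S+L)_{:,t}\bigr)
  \;+\; \lambda_{1}\,\lVert S \rVert_{1}
  \;+\; \lambda_{2}\,\lVert L \rVert_{*},
\qquad W = S + L
```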

  13. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition even if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of the more generic image understanding problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  14. Wavelet-based ground vehicle recognition using acoustic signals

    Science.gov (United States)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will
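
    The feature vector described above combines per-level statistics and energy of the DWT coefficients and is matched against stored reference vectors. A minimal sketch of that pipeline, assuming a db4 wavelet, five decomposition levels and synthetic stand-in signals, is given below.

```python
# Wavelet-based acoustic feature sketch: per-level statistics and energy
# of DWT coefficients, matched to stored reference vectors.
# The wavelet ('db4') and decomposition depth are illustrative choices.
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                  # one approximation + `level` detail bands
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]   # stats + energy
    return np.asarray(feats)

def classify(signal, references):
    """references: dict mapping vehicle name -> reference feature vector."""
    f = wavelet_features(signal)
    return min(references, key=lambda k: np.linalg.norm(f - references[k]))

# Hypothetical usage with synthetic signals standing in for recorded audio:
rng = np.random.default_rng(0)
refs = {"vehicle_A": wavelet_features(rng.normal(size=8000)),
        "vehicle_B": wavelet_features(rng.normal(size=8000) * 3)}
print(classify(rng.normal(size=8000), refs))
```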

  15. Recognition of risk situations based on endoscopic instrument tracking and knowledge based situation modeling

    Science.gov (United States)

    Speidel, Stefanie; Sudra, Gunther; Senemaud, Julien; Drentschew, Maximilian; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2008-03-01

    Minimally invasive surgery has gained significantly in importance over the last decade due to its numerous advantages for the patient. The surgeon has to adopt special operation techniques and deal with difficulties like the complex hand-eye coordination, limited field of view and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality (AR) techniques. In order to generate context-aware assistance it is necessary to recognize the current state of the intervention using intraoperatively gained sensor data and a model of the surgical intervention. In this paper we present the recognition of risk situations: the system warns the surgeon if an instrument gets too close to a risk structure. The context-aware assistance system starts with an image-based analysis to retrieve information from the endoscopic images. This information is classified and a semantic description is generated. The description is used to recognize the current state and launch an appropriate AR visualization. In detail, we present automatic vision-based instrument tracking to obtain the positions of the instruments. Situation recognition is performed using a knowledge representation based on a description logic system. Two augmented reality visualization programs are realized to warn the surgeon if a risk situation occurs.

  16. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System

    Directory of Open Access Journals (Sweden)

    Fernando Castaño

    2017-09-01

    Full Text Available Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organizing map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organizing map in snowy conditions.

  17. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.

    Science.gov (United States)

    Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio

    2017-09-14

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organizing map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organizing map in snowy conditions.

  18. AN EFFICIENT SELF-UPDATING FACE RECOGNITION SYSTEM FOR PLASTIC SURGERY FACE

    Directory of Open Access Journals (Sweden)

    A. Devi

    2016-08-01

    Full Text Available A facial recognition system is fundamentally a computer application for the automatic identification of a person from a digitized image or a video source. The major cause of poor overall performance is the transformation of the user's appearance due to aspects such as ageing, beard growth, sun-tan etc. To overcome this drawback, a self-update process has been developed in which the system learns the biometric attributes of the user every time the user interacts with it, and the information is updated automatically. Plastic surgery procedures offer a skilled and durable means of enhancing facial appearance by correcting anomalies in the features and treating the facial skin to obtain a youthful look. When plastic surgery is performed on an individual, the facial features undergo reconstruction either locally or globally. However, the changes newly introduced by plastic surgery remain hard to model with the available face recognition systems and deteriorate the performance of face recognition algorithms. Facial plastic surgery thus changes the facial features to a large extent and thereby creates a significant challenge for face recognition systems. This work introduces a new multimodal biometric approach that makes use of novel techniques to boost the recognition rate and security. The proposed method consists of several processes: face segmentation using an Active Appearance Model (AAM), face normalization using a Kernel Density Estimate/Point Distribution Model (KDE-PDM), feature extraction using Local Gabor XOR Patterns (LGXP) and classification using Independent Component Analysis (ICA). Efficient techniques have been used in each phase of the FRAS in order to obtain improved results.

  19. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract features from the equalized image and obtain its principal components. A BP neural network is then trained on the training samples; an improved weight adjustment method is used because the conventional BP algorithm suffers from slow convergence and easily falls into local minima during training. Finally, the trained BP neural network is applied to the test samples to classify and identify the face images, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
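
    The processing chain above (histogram equalization, PCA feature extraction, neural-network classification) can be sketched as follows; scikit-learn's MLPClassifier stands in for the paper's improved BP training rule, and the data here are random placeholders for ORL faces.

```python
# Histogram equalisation -> PCA features -> neural-network classifier.
# sklearn's MLPClassifier stands in for the paper's improved BP network.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def preprocess(gray_images):
    """Equalise each grayscale face and flatten it to a vector."""
    return np.array([cv2.equalizeHist(img).ravel() for img in gray_images])

# Stand-in data; replace with ORL face images (92x112 grayscale).
faces = [np.random.randint(0, 256, (112, 92), np.uint8) for _ in range(40)]
labels = np.repeat(np.arange(8), 5)

pipeline = make_pipeline(
    PCA(n_components=30),                              # principal components
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
)
pipeline.fit(preprocess(faces), labels)
print(pipeline.score(preprocess(faces), labels))       # training accuracy
```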

  20. Multi-font printed Mongolian document recognition system

    Science.gov (United States)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. A large number of printed Mongolian documents need to be digitized for digital libraries and various applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which bring challenges to Mongolian OCR research. As traditional Mongolian script has some special characteristics, for example, one character may be part of another character, we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of projections and connected components. As Mongolian has different font-types, which are categorized into two major groups, the segmentation parameters are adjusted for each group. A font-type classification method for the two font-type groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective. The text recognition rate is 96.9% on test samples from practical documents with multiple font-types and mixed scripts.

  1. A vision-based automated guided vehicle system with marker recognition for indoor use.

    Science.gov (United States)

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers that contain a capital letter or a triangle indicating direction. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected using a bird's eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used: the probability of feature matching is calculated from the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary markers show that the proposed method is a solution for an indoor AGV system.
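
    The marker-candidate step above uses hue and saturation values plus a Hough transform. The sketch below shows only that step with assumed threshold values; the bird's-eye-view warp and the distance-transform character matching are omitted.

```python
# Marker-candidate extraction from hue/saturation plus a Hough transform
# to find the marker's straight edges. Threshold values are assumptions.
import cv2
import numpy as np

def marker_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)
    # Keep strongly saturated pixels inside an assumed marker hue band.
    mask = ((h > 90) & (h < 130) & (s > 120)).astype(np.uint8) * 255
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    return mask, ([] if lines is None else lines.reshape(-1, 4))

image = cv2.imread("floor_view.jpg")        # hypothetical camera frame
if image is not None:
    mask, lines = marker_candidates(image)
    print(f"{len(lines)} candidate marker edges found")
```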

  2. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an algorithm that improves face recognition performance by identifying mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy a similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional classical vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus the face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations have shown that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real probe-to-gallery identification applications in face recognition systems. Moreover, this proposed method can also be applied to other recognition systems and thereby additionally improve their recognition scores.

  3. A general framework for sensor-based human activity recognition.

    Science.gov (United States)

    Köping, Lukas; Shirahama, Kimiaki; Grzegorzek, Marcin

    2018-04-01

    Today's wearable devices like smartphones, smartwatches and intelligent glasses collect a large amount of data from their built-in sensors like accelerometers and gyroscopes. These data can be used to identify a person's current activity and in turn can be utilised for applications in the field of personal fitness assistants or elderly care. However, developing such systems is subject to certain restrictions: (i) since more and more new sensors will be available in the future, activity recognition systems should be able to integrate these new sensors with a small amount of manual effort and (ii) such systems should avoid high acquisition costs for computational power. We propose a general framework that achieves an effective data integration based on the following two characteristics: Firstly, a smartphone is used to gather and temporally store data from different sensors and transfer these data to a central server. Thus, various sensors can be integrated into the system as long as they have programming interfaces to communicate with the smartphone. The second characteristic is a codebook-based feature learning approach that can encode data from each sensor into an effective feature vector only by tuning a few intuitive parameters. In the experiments, the framework is realised as a real-time activity recognition system that integrates eight sensors from a smartphone, smartwatch and smartglasses, and its effectiveness is validated from different perspectives such as accuracies, sensor combinations and sampling rates. Copyright © 2018 Elsevier Ltd. All rights reserved.
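
    The codebook-based feature learning described above can be illustrated with a generic bag-of-words encoding of a one-dimensional sensor stream: cluster sliding windows with k-means, then histogram the assignments. The window length and codebook size below are illustrative assumptions.

```python
# Generic codebook encoding of a 1-D sensor stream: cluster sliding-window
# subsequences with k-means, then histogram the cluster assignments.
# Window length and codebook size are illustrative parameters only.
import numpy as np
from sklearn.cluster import KMeans

def windows(stream, length=32, step=16):
    return np.array([stream[i:i + length]
                     for i in range(0, len(stream) - length + 1, step)])

def build_codebook(training_streams, n_words=16):
    all_windows = np.vstack([windows(s) for s in training_streams])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_windows)

def encode(stream, codebook):
    assignments = codebook.predict(windows(stream))
    hist = np.bincount(assignments, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)        # normalised feature vector

# Hypothetical accelerometer-like streams:
rng = np.random.default_rng(1)
codebook = build_codebook([rng.normal(size=2000) for _ in range(5)])
print(encode(rng.normal(size=2000), codebook)[:8])
```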

  4. A modified artificial immune system based pattern recognition approach -- an application to clinic diagnostics

    Science.gov (United States)

    Zhao, Weixiang; Davis, Cristina E.

    2011-01-01

    Objective: This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials: Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with radial basis function – partial least square regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods, the back-propagation neural network (BPNN) and the orthogonal radial basis function network (Ortho-RBF network). Results: The diagnosis results show that: (1) both AIS-kNN and AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possible improper AIS parameters. For example, based upon the AIS memory cells of “replacement threshold = 0.3”, the average misclassification ratios of the two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS presented its robustness in terms of the AIS-created memory cells, showing a smaller standard deviation of the results from the multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells as an example, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN) and those for

  5. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.

  6. Face recognition based on matching of local features on 3D dynamic range sequences

    Science.gov (United States)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  7. PALESTINE AUTOMOTIVE LICENSE IDENTITY RECOGNITION FOR INTELLIGENT PARKING SYSTEM

    Directory of Open Access Journals (Sweden)

    ANEES ABU SNEINEH

    2017-05-01

    Full Text Available Providing employees with protection and security is one of the key concerns of any organization. This goal can be implemented mainly by managing and protecting employees’ cars in the parking area. Therefore, a parking area must be managed and organized with smart technologies and tools that can be applied and integrated in an intelligent parking system. This paper presents the tools based on image recognition technology that can be used to effectively control various parts of a parking system. An intelligent automotive parking system is effectively implemented by integrating image processing technologies and an Arduino controller. Results show that intelligent parking is successfully implemented based on car ID image capture to meet the need for managing and organizing car parking systems.

  8. Dynamic Recognition of Driver’s Propensity Based on GPS Mobile Sensing Data and Privacy Protection

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2016-01-01

    Full Text Available A driver's propensity is a dynamic measure of the driver's emotional preference characteristics during the driving process. It is a core parameter for computing the driver's intention and awareness in safety driving assistance systems, especially in vehicle collision warning systems. It is also an important factor in achieving Driver-Vehicle-Environment collaborative wisdom and control at the macroscopic level. In this paper, a dynamic recognition model of driver's propensity based on a support vector machine is established, with vehicle safety control technology and respect for and protection of the driver's privacy as preconditions. The travel time on the experimental roads, obtained through GPS, is taken as the characteristic parameter. The Driver-Vehicle-Environment sensing information was obtained through psychological questionnaires, real vehicle experiments and virtual driving experiments, and is used for parameter calibration and validation of the model. Results show that the established recognition model of driver's propensity is reasonable and feasible and can achieve dynamic recognition of driver's propensity to some extent. The recognition model provides a reference and theoretical basis for people-centered personalized vehicle active safety systems, especially for networked vehicle safety technology.

  9. Towards PLDA-RBM based speaker recognition in mobile environment: Designing stacked/deep PLDA-RBM systems

    DEFF Research Database (Denmark)

    Nautsch, Andreas; Hao, Hong; Stafylakis, Themos

    2016-01-01

    recognition: two deep architectures are presented and examined, which aim at suppressing channel effects and recovering speaker-discriminative information on back-ends trained on a small dataset. Experiments are carried out on the MOBIO SRE'13 database, which is a challenging and publicly available dataset...... for mobile speaker recognition with limited amounts of training data. The experiments show that the proposed system outperforms the baseline i-vector/PLDA approach by relative gains of 31% on female and 9% on male speakers in terms of half total error rate....

  10. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    Science.gov (United States)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on the information extracted locally. In this paper, a local feature extraction technique was developed. This feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the time-frequency plane. This feature captured the time-frequency events producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we referred to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consisted of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.
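
    The voice-print feature above is built by averaging along diagonal directions of the time-frequency plane. The sketch below computes a spectrogram and takes the mean of each diagonal as a rough stand-in; the FFT parameters and the exact averaging scheme are assumptions.

```python
# Sketch of a "voice print" style local feature: average the log-spectrogram
# along its diagonals, capturing joint time-frequency trends.
# FFT/window parameters and the averaging scheme are assumptions.
import numpy as np
from scipy.signal import spectrogram

def diagonal_profile(audio, fs=16000):
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    log_s = np.log(sxx + 1e-10)
    offsets = range(-log_s.shape[0] + 1, log_s.shape[1])
    # One mean value per diagonal of the time-frequency matrix.
    return np.array([np.mean(np.diagonal(log_s, offset=k)) for k in offsets])

# Hypothetical one-second signal standing in for a speech recording:
rng = np.random.default_rng(2)
print(diagonal_profile(rng.normal(size=16000))[:10])
```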

  11. Intrusion recognition for optic fiber vibration sensor based on the selective attention mechanism

    Science.gov (United States)

    Xu, Haiyan; Xie, Yingjuan; Li, Min; Zhang, Zhuo; Zhang, Xuewu

    2017-11-01

    Distributed fiber-optic vibration sensors have received extensive investigation and play a significant role in the sensor panorama. A fiber-optic perimeter detection system based on an all-fiber interferometric sensor is proposed; through back-end analysis, processing and intelligent identification, it can distinguish the effects of different intrusion activities. In this paper, an intrusion recognition method based on the auditory selective attention mechanism is proposed. Firstly, considering the time-frequency characteristics of the vibration, the spectrogram is calculated. Secondly, imitating the selective attention mechanism, the color, direction and brightness maps of the spectrogram are computed. Based on these maps, the feature matrix is formed after normalization. The system can recognize intrusion activities occurring along the perimeter sensors. Experimental results show that the proposed method is able to differentiate intrusion signals from ambient noise. Moreover, the recognition rate of the system is improved while the false alarm rate is reduced; the approach is validated by extensive practical experiments and projects.

  12. A knowledge-based approach for recognition of handwritten Pitman ...

    Indian Academy of Sciences (India)

    The paper describes a knowledge-based approach for the recognition of PSL strokes. Information about location and the direction of the starting point and final point of strokes are considered the knowledge base for recognition of strokes. The work comprises preprocessing, determination of starting and final points, ...

  13. Matching score based face recognition

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different

  14. Facial Expression Recognition Based on TensorFlow Platform

    Directory of Open Access Journals (Sweden)

    Xia Xiao-Ling

    2017-01-01

    Full Text Available Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision and other fields, and in recent years it has gradually become a hot research topic. However, different people express their emotions in different ways, and under the influence of brightness, background and other factors, facial expression recognition faces some difficulties. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain the model on a facial expression dataset (the Extended Cohn-Kanade dataset), which maintains recognition accuracy while greatly reducing the training time.
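
    The transfer-learning setup described above (a pretrained Inception-v3 with a retrained classification head) follows a standard Keras pattern, sketched below with random stand-in tensors in place of the Extended Cohn-Kanade images; the head architecture is an assumption, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch in Keras: freeze a pretrained
# Inception-v3 base and train a small classification head.
# The head layers and the stand-in data are assumptions, not the
# authors' exact retraining setup.
import tensorflow as tf

NUM_CLASSES = 7                              # basic facial expressions

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                       # keep pretrained features fixed

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # Inception expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random tensors stand in for CK+ expression images and labels:
x = tf.random.uniform((8, 299, 299, 3), maxval=255.0)
y = tf.random.uniform((8,), maxval=NUM_CLASSES, dtype=tf.int32)
model.fit(x, y, epochs=1)
```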

  15. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it bears some resemblance to factor analysis, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. The experimental results indicate that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
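
    The wavelet-then-PCA chain above can be sketched in Python as follows (the paper itself uses MATLAB); the Haar wavelet, single decomposition level and component count are assumptions.

```python
# Wavelet-then-PCA feature chain (Python equivalent of the MATLAB approach
# described above; wavelet type and component count are assumptions).
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_approximation(gray_image):
    """Keep only the low-frequency sub-band of a single-level 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), "haar")
    return cA.ravel()

# Stand-in face images; replace with a real gallery.
rng = np.random.default_rng(3)
faces = rng.integers(0, 256, size=(40, 64, 64))
features = np.array([wavelet_approximation(f) for f in faces])

pca = PCA(n_components=20).fit(features)
projected = pca.transform(features)          # eigen-features for matching
print(projected.shape)
```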

  16. Iris analysis for biometric recognition systems

    CERN Document Server

    Bodade, Rajesh M

    2014-01-01

    The book presents three most significant areas in Biometrics and Pattern Recognition. A step-by-step approach for design and implementation of Dual Tree Complex Wavelet Transform (DTCWT) plus Rotated Complex Wavelet Filters (RCWF) is discussed in detail. In addition to the above, the book provides detailed analysis of iris images and two methods of iris segmentation. It also discusses simplified study of some subspace-based methods and distance measures for iris recognition backed by empirical studies and statistical success verifications.

  17. Smartphone based face recognition tool for the blind.

    Science.gov (United States)

    Kramer, K M; Hedin, D S; Rolkosky, D J

    2010-01-01

    The inability to identify people during group meetings is a disadvantage for blind people in many professional and educational situations. To explore the efficacy of face recognition using smartphones in these settings, we have prototyped and tested a face recognition tool for blind users. The tool utilizes smartphone technology in conjunction with a wireless network to provide audio feedback identifying the people in front of the blind user. Testing indicated that the face recognition technology can tolerate up to a 40 degree angle between the direction a person is looking and the camera's axis, and achieved a 96% success rate with no false positives. Future work will further develop the technology for local face recognition on the smartphone in addition to remote server-based face recognition.

  18. Developing a Credit Recognition System for Chinese Higher Education Institutions

    Science.gov (United States)

    Li, Fuhui

    2015-01-01

    In recent years, a credit recognition system has been developing in Chinese higher education institutions. Much research has been done on this development, but it has been concentrated on system building, barriers/issues and international practices. The relationship between credit recognition system reforms and democratisation of higher education…

  19. Fast and Low-Cost Mechatronic Recognition System for Persian Banknotes

    Directory of Open Access Journals (Sweden)

    Majid Behjat

    2014-03-01

    Full Text Available In this paper, we designed a fast and low-cost mechatronic system for recognition of the eight Persian banknotes currently in circulation. Firstly, we proposed a mechanical solution for avoiding the extra processing time caused by detecting the position of the banknote and correcting the paper angle in an input image. We also defined new parameters for feature extraction, including colour features (RGBR values), size features (LWR) and texture features (CRLVR value). Then, we used a Multi-Layer Perceptron (MLP) neural network in the recognition phase to reduce the necessary processing time. In this research, we collected a comprehensive database of Persian banknote images (about 4000 double-sided images of banknotes in common use). We reached about 99.06% accuracy (average for each side) in final banknote recognition by testing 800 different worn, torn and new banknotes which were not part of the initial learning phase. This accuracy increases to 99.62% in double-sided decision mode. Finally, we designed ATmega32 microcontroller-based hardware with a 16 MHz clock frequency to implement the proposed system, which can recognize sample banknotes in about 480 ms and 560 ms for single-sided and double-sided detection respectively, after image scanning.

  20. Cluster-Based Adaptation Using Density Forest for HMM Phone Recognition

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    The dissimilarity between the training and test data in speech recognition systems is known to have a considerable effect on recognition accuracy. To address this problem, we use a density forest to cluster the data and the maximum a posteriori (MAP) method to build cluster-based adapted Gaussian mixture models (GMMs) in HMM speech recognition. Specifically, a set of bagged versions of the training data for each state in the HMM is generated, and each of these versions is used to generate one GMM and one tree in the density forest. Thereafter, an acoustic model forest is built by replacing the data of each leaf (cluster) in each tree with the corresponding GMM adapted from the leaf data using the MAP method. The results show that the proposed approach achieves a 3.8% (absolute) lower phone error rate compared with the standard HMM/GMM and a 0.8% (absolute) lower PER compared with bagged HMM/GMM.

  1. Automated Degradation Diagnosis in Character Recognition System Subject to Camera Vibration

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2014-01-01

    Full Text Available Degradation diagnosis plays an important role in degraded character processing, as it can indicate the recognition difficulty of a given degraded character. In this paper, we present a framework for an automated degraded character recognition system using a statistical syntactic approach with 3D primitive symbols, integrated with degradation diagnosis to provide accurate and reliable recognition results. Our contribution is to design the framework so as to build character recognition submodels corresponding to degradation caused by camera vibration or defocus. In each character recognition submodel, the statistical syntactic approach using 3D primitive symbols is proposed to improve degraded character recognition performance. The experiments show attractive results, highlighting the system's efficiency and the recognition performance of the statistical syntactic approach using 3D primitive symbols on the degraded character dataset.

  2. Vehicle license plate recognition based on geometry restraints and multi-feature decision

    Science.gov (United States)

    Wu, Jianwei; Wang, Zongyue

    2005-10-01

    Vehicle license plate (VLP) recognition is of great importance to many traffic applications. Though researchers have paid much attention to VLP recognition, there is not yet a fully operational VLP recognition system, for many reasons. This paper discusses a valid and practical method for vehicle license plate recognition based on geometric constraints and multi-feature decision, including statistical and structural features. In general, VLP recognition includes the following steps: location of the VLP, character segmentation, and character recognition. This paper discusses the three steps in detail. The characters of a VLP are often inclined due to many factors, which makes them more difficult to recognize; therefore geometric constraints, such as the general ratio of length to width and adjacent edges being perpendicular, are used for incline correction. Image moments have been proved to be invariant to translation, rotation and scaling, so the image moment is used as one feature for character recognition. Strokes are the basic elements of writing, and hence taking them as features is helpful to character recognition. Finally, we take the image moments, the strokes and the number of each stroke for each character image, together with other structural and statistical features, as the multi-feature with which each character image is matched against sample character images, so that each character can be recognized by a BP neural net. The proposed method combines statistical and structural features for VLP recognition, and the results show its validity and efficiency.
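
    Image moments are used above as translation-, rotation- and scale-invariant character features. One common realization is the seven Hu moments, sketched below with OpenCV on a synthetic character patch; this is an illustration of the idea, not the paper's exact feature set.

```python
# Rotation/translation/scale-invariant image moments (Hu moments) for a
# binarised character image, as one ingredient of a multi-feature match.
import cv2
import numpy as np

def hu_moment_features(char_image):
    """char_image: grayscale character patch (uint8)."""
    _, binary = cv2.threshold(char_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).ravel()
    # Log-scale the moments so their magnitudes are comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Hypothetical character patch standing in for a segmented plate character:
patch = np.zeros((40, 24), np.uint8)
cv2.putText(patch, "8", (2, 32), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)
print(hu_moment_features(patch))
```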

  3. Sub-pattern based multi-manifold discriminant analysis for face recognition

    Science.gov (United States)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image for recognition, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  4. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: features, learning, and classifiers; biometrics; data stream classification and big data analytics; image processing and computer vision; medical applications; applications; and RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  5. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    Science.gov (United States)

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-09-22

    realization of a practical EEG-based emotion recognition system.

  6. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Jianhai Zhang

    2016-09-01

    contribution to the realization of a practical EEG-based emotion recognition system.

  7. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier on the extracted LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is improved significantly.
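
    A typical way to compute the LBP features referred to above is a grid of uniform-LBP histograms, as sketched below; the radius, neighbour count and grid size are assumptions, and the DBN classifier is not included.

```python
# Uniform LBP histogram per image region -- the texture features that the
# abstract feeds to the (improved) deep belief network classifier.
# Radius, number of points and grid size are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, p=8, r=1, grid=(4, 4)):
    lbp = local_binary_pattern(gray, P=p, R=r, method="uniform")
    n_bins = p + 2                                  # uniform patterns + "other"
    h_step = gray.shape[0] // grid[0]
    w_step = gray.shape[1] // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h_step:(i + 1) * h_step,
                        j * w_step:(j + 1) * w_step]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

face = np.random.randint(0, 256, (96, 96)).astype(np.uint8)  # stand-in image
print(lbp_histogram(face).shape)                              # (4*4*10,)
```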

  8. Lexicon Reduction for Urdu/Arabic Script Based Character Recognition: A Multilingual OCR

    Directory of Open Access Journals (Sweden)

    Saeeda Naz

    2016-04-01

    Full Text Available Arabic script character recognition is a challenging task due to the complexity of the script and the huge number of ligatures. We present a method for the development of a multilingual Arabic script OCR (Optical Character Recognition) system and lexicon reduction for Arabic script and its derivative languages. The objective of the proposed method is to cope with the large character datasets of Urdu and similar scripts by using the Ghost Character Theory (GCT) concept. Arabic and its sibling script languages share a similar character set; the character sets differ in diacritics and writing styles such as Naskh or Nastaliq. Based on the proposed method, the lexicon for Arabic and Arabic-script-based languages can be reduced approximately 20-fold. The proposed multilingual Arabic script OCR approach has been evaluated for online Arabic and a derivative language, Urdu, using a BPNN. The results show that the proposed method helps not only to reduce the lexicon but also to develop a multi-language character recognition system for Arabic script.

  9. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    Science.gov (United States)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer, is mounted on the ceiling opposite (at the required angle to) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are already stored in the database. A blind child first reads the embossed character (object) with the help of his fingers, then speaks the answer, the name of the character, its shape etc. into the microphone. On the voice command of the blind child received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes, which records and processes the commands of the blind child.

  10. Social context predicts recognition systems in ant queens

    DEFF Research Database (Denmark)

    Dreier, Stéphanie Agnès Jeanine; d'Ettorre, Patrizia

    2009-01-01

    Recognition of group-members is a key feature of sociality. Ants use chemical communication to discriminate nestmates from intruders, enhancing kin cooperation and preventing parasitism. The recognition code is embedded in their cuticular chemical profile, which typically varies between colonies....... We predicted that ants might be capable of accurate recognition in unusual situations when few individuals interact repeatedly, as new colonies started by two to three queens. Individual recognition would be favoured by selection when queens establish dominance hierarchies, because repeated fights...... for dominance are costly; but it would not evolve in absence of hierarchies. We previously showed that Pachycondyla co-founding queens, which form dominance hierarchies, have accurate individual recognition based on chemical cues. Here, we used the ant Lasius niger to test the null hypothesis that individual...

  11. Fluorescent sensor systems based on nanostructured polymeric membranes for selective recognition of Aflatoxin B1.

    Science.gov (United States)

    Sergeyeva, Tetyana; Yarynka, Daria; Piletska, Elena; Lynnik, Rostyslav; Zaporozhets, Olga; Brovko, Oleksandr; Piletsky, Sergey; El'skaya, Anna

    2017-12-01

    Nanostructured polymeric membranes for selective recognition of aflatoxin B1 were synthesized in situ and used as highly sensitive recognition elements in the developed fluorescent sensor. Artificial binding sites capable of selective recognition of aflatoxin B1 were formed in the structure of the polymeric membranes using the method of molecular imprinting. A composition of molecularly imprinted polymer (MIP) membranes was optimized using the method of computational modeling. The MIP membranes were synthesized using the non-toxic close structural analogue of aflatoxin B1, ethyl-2-oxocyclopentanecarboxylate as a dummy template. The MIP membranes with the optimized composition demonstrated extremely high selectivity towards aflatoxin B1 (AFB1). Negligible binding of close structural analogues of AFB1 - aflatoxins B2 (AFB2), aflatoxin G2 (AFG2), and ochratoxin A (OTA) was demonstrated. Binding of AFB1 by the MIP membranes was investigated as a function of both type and concentration of the functional monomer in the initial monomer composition used for the membranes' synthesis, as well as sample composition. The conditions of the solid-phase extraction of the mycotoxin using the MIP membrane as a stationary phase (pH, ionic strength, buffer concentration, volume of the solution, ratio between water and organic solvent, filtration rate) were optimized. The fluorescent sensor system based on the optimized MIP membranes provided a possibility of AFB1 detection within the range 14-500 ng/mL, demonstrating a detection limit (3σ) of 14 ng/mL. The developed technique was successfully applied for the analysis of model solutions and waste waters from bread-making plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  13. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the heart sound recognition rate and reduce the recognition time, this paper introduces a new method for heart sound pattern recognition using the Heart Sound Texture Map. Based on the heart sound model, we give the definitions of the heart sound time-frequency diagram and the Heart Sound Texture Map, study the principle and realization of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the Short-Time Fourier Transform to obtain the two-dimensional heart sound time-frequency diagram. We then propose a corner correlation recognition algorithm based on the Heart Sound Texture Map according to the characteristics of heart sounds. The simulation results show that, compared with traditional window functions, the Heart Sound Window Function makes the textures of the first (S1) and second (S2) heart sounds clearer, and that the corner correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the cost, making it an effective heart sound recognition method.
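
    The heart sound time-frequency diagram above is obtained with a Heart Sound Window Function and the short-time Fourier transform. The sketch below uses a standard Hann window as a stand-in for the paper's window function; the sampling rate and STFT parameters are assumptions.

```python
# Short-time Fourier transform of a heart-sound recording to obtain a
# time-frequency "texture" image. A standard Hann window stands in for
# the paper's Heart Sound Window Function; parameters are assumptions.
import numpy as np
from scipy.signal import stft

def heart_sound_texture(pcg, fs=2000):
    f, t, z = stft(pcg, fs=fs, window="hann", nperseg=256, noverlap=192)
    texture = np.log(np.abs(z) + 1e-10)     # log-magnitude texture map
    return f, t, texture

# Synthetic stand-in: two low-frequency bursts imitating S1 and S2.
fs = 2000
time = np.arange(0, 1.0, 1 / fs)
pcg = (np.sin(2 * np.pi * 50 * time) * (time < 0.1) +
       np.sin(2 * np.pi * 60 * time) * ((time > 0.35) & (time < 0.45)))
f, t, tex = heart_sound_texture(pcg, fs)
print(tex.shape)
```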

  14. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for TSR system based on deep convolutional neural network. In order to enhance the expression of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connection is designed. Our network has 10 layers with parameters (block-layer seen as a single layer): the first seven are alternate convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network adopts scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve the accuracy rate of 99.67%, exceeding the state-of-the-art results.
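    The block-layer described above is not specified in detail in this record; the following Python (PyTorch) sketch illustrates one plausible reading of a unit that combines a network-in-network style 1x1 convolution, a residual connection, and SELU activations. The channel count, kernel sizes and input size are illustrative assumptions, not the published GTSRB configuration.

```python
# Hedged sketch of a "block-layer": a small convolutional body (including a
# 1x1 network-in-network convolution) wrapped in a residual connection, with
# SELU activations. Sizes below are illustrative guesses only.
import torch
import torch.nn as nn

class BlockLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.SELU(),                                      # self-normalizing activation
            nn.Conv2d(channels, channels, kernel_size=1),   # network-in-network 1x1 conv
            nn.SELU(),
        )

    def forward(self, x):
        # Residual connection: add the block input back to its output.
        return torch.selu(self.body(x) + x)

if __name__ == "__main__":
    x = torch.randn(8, 32, 48, 48)           # batch of 48x48 feature maps
    print(BlockLayer(32)(x).shape)           # torch.Size([8, 32, 48, 48])
```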

  15. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradients (HOG) features from a hand image and then using them to train SVM models, which are used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. On each input image, first obtained using the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of the new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
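    As an illustration of the HOG-plus-SVM stage of such a pipeline, the following hedged Python sketch extracts HOG descriptors from placeholder segmented hand crops with scikit-image and trains a multiclass SVM with scikit-learn. The image size, HOG parameters and synthetic data are assumptions, not the values used in the paper.

```python
# Hedged sketch of the HOG + SVM stage: extract HOG descriptors from
# segmented hand images and train a multiclass SVM on them.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(image):
    # 64x64 grayscale hand crop -> oriented-gradient histogram vector
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # placeholder for segmented hand crops
labels = rng.integers(0, 4, size=40)     # placeholder alphabet labels

X = np.array([hog_descriptor(im) for im in images])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print(clf.predict(X[:5]))
```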

  16. Toward a multipoint optical fibre sensor system for use in process water systems based on artificial neural network pattern recognition

    International Nuclear Information System (INIS)

    King, D; Lyons, W B; Flanagan, C; Lewis, E

    2005-01-01

    An optical fibre sensor capable of detecting various concentrations of ethanol in water supplies is reported. The sensor is based on a U-bend sensor configuration and is incorporated into a 170-metre length of silica cladding silica core optical fibre. The sensor is interrogated using Optical Time Domain Reflectometry (OTDR) and it is proposed to apply artificial neural network (ANN) pattern recognition techniques to the resulting OTDR signals to accurately classify the sensor test conditions. It is also proposed that additional U-bend configuration sensors will be added to the fibre measurement length, in order to implement a multipoint optical fibre sensor system

  17. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.

    Science.gov (United States)

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-04-15

    Polyarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions; however, there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a polyarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.

  18. An automatic system for Turkish word recognition using Discrete Wavelet Neural Network based on adaptive entropy

    International Nuclear Information System (INIS)

    Avci, E.

    2007-01-01

    In this paper, an automatic system is presented for word recognition using real Turkish word signals. This paper especially deals with combination of the feature extraction and classification from real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: discrete wavelet layer and multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the used system is evaluated by using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented in this paper. The rate of correct recognition is about 92.5% for the sample speech signals. (author)
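    The discrete wavelet layer described above combines a DWT with wavelet entropy; the following Python sketch shows one common way to build such features with PyWavelets, summarizing each sub-band of a decomposed word signal by its Shannon entropy. The wavelet family, decomposition depth and entropy definition are assumptions and may differ from the author's adaptive scheme.

```python
# Hedged sketch of DWT + wavelet-entropy feature extraction: decompose a word
# signal and summarize each sub-band by a normalized Shannon entropy, yielding
# a compact feature vector for a neural classifier.
import numpy as np
import pywt

def wavelet_entropy_features(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for band in coeffs:
        energy = band ** 2
        p = energy / (energy.sum() + 1e-12)                  # energy distribution in the band
        features.append(-(p * np.log2(p + 1e-12)).sum())     # Shannon entropy of the band
    return np.array(features)

if __name__ == "__main__":
    t = np.linspace(0, 1, 8000)                              # 1 s of a synthetic "word"
    word = np.sin(2 * np.pi * 220 * t) * np.hanning(t.size)
    print(wavelet_entropy_features(word))                    # one entropy value per sub-band
```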

  19. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    Full Text Available In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, Template Matching techniques like Dynamic Time Warping (DTW), Statistical Pattern Matching techniques such as the Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM), and Machine Learning techniques such as Neural Networks (NN), Support Vector Machine (SVM), and Decision Trees (DT) are most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy is achieved with the HMM technique: it offered 100% accuracy during the training process and 97.92% during testing.
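    Among the techniques listed, Dynamic Time Warping is the simplest to illustrate; the Python sketch below computes a plain DTW alignment cost between two feature sequences of unequal length. The 13-dimensional cepstral frames are synthetic placeholders, and no path constraints (e.g. a Sakoe-Chiba band) are applied.

```python
# Hedged sketch of Dynamic Time Warping: align two feature sequences of
# different lengths and return the cumulative alignment cost.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])        # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(1)
template = rng.random((40, 13))    # e.g. 40 frames of 13-dim cepstral features
utterance = rng.random((55, 13))
print(dtw_distance(template, utterance))
```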

  20. Pattern-recognition system application to EBR-II plant-life extension

    International Nuclear Information System (INIS)

    King, R.W.; Radtke, W.H.; Mott, J.E.

    1988-01-01

    A computer-based pattern-recognition system, the System State Analyzer (SSA), is being used as part of the EBR-II plant-life extension program for detection of degradation and other abnormalities in plant systems. The SSA is used for surveillance of the EBR-II primary system instrumentation, primary sodium pumps, and plant heat balances. Early results of this surveillance indicate that the SSA can detect instrumentation degradation and system performance degradation over varying time intervals, and can provide derived signal values to replace signals from failed critical sensors. These results are being used in planning for extended-life operation of EBR-II

  1. Constraints in distortion-invariant target recognition system simulation

    Science.gov (United States)

    Iftekharuddin, Khan M.; Razzaque, Md A.

    2000-11-01

    Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and the identification of the objects using LVQNN. In this current paper, we extend the previous approach for recognition of targets varying in rotation, translation, scale, and combination of all three distortions. We obtain the analytical results of the system level design to show that the approach performs well with some constraints. The first constraint determines the size of the input images and input filters. The second constraint shows the limits on amount of rotation, translation, and scale of input objects. We present the simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system level design.

  2. Fault Diagnosis of Car Engine by Using a Novel GA-Based Extension Recognition Method

    Directory of Open Access Journals (Sweden)

    Meng-Hui Wang

    2014-01-01

    Full Text Available For passenger safety, recognizing hidden faults in car engines is one of the most important tasks for a maintenance engineer, so that the engines can be regulated to be safe and the reliability of automobile systems improved. In this paper, we present a novel fault recognition method based on the genetic algorithm (GA) and extension theory, and apply this method to the fault recognition of a practical car engine. The proposed recognition method has been tested on the Nissan Cefiro 2.0 engine and has also been compared to other traditional classification methods. Experimental results show good performance in the recognition of hidden car engine faults, and the proposed method can also be applied to other industrial apparatus.

  3. Wearable-Based Human Activity Recognition Using an IoT Approach

    Directory of Open Access Journals (Sweden)

    Diego Castro

    2017-11-01

    Full Text Available This paper presents a novel system based on the Internet of Things (IoT) for Human Activity Recognition (HAR) by monitoring vital signs remotely. We use machine learning algorithms to determine the activity done within four pre-established categories (lie, sit, walk and jog). Meanwhile, it is able to give feedback during and after the activity is performed, using a remote monitoring component with remote visualization and programmable alarms. This system was successfully implemented with a 95.83% success ratio.

  4. The Army word recognition system

    Science.gov (United States)

    Hadden, David R.; Haratz, David

    1977-01-01

    The application of speech recognition technology in the Army command and control area is presented. The problems associated with this program are described, as well as its relevance in terms of man/machine interactions, voice inflexions, and the amount of training needed to interact with and utilize the automated system.

  5. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors.

    Science.gov (United States)

    Cippitelli, Enea; Gasparrini, Samuele; Gambi, Ennio; Spinsante, Susanna

    2016-01-01

    The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.
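    The key-pose idea sketched above can be illustrated with a short Python example: skeleton frames of a sequence are clustered with k-means, the ordered cluster centres serve as the sequence descriptor, and a multiclass SVM performs classification. Joint count, number of key poses and the random data are assumptions; the published method uses its own clustering and feature layout.

```python
# Hedged sketch of key-pose based activity recognition: cluster skeleton
# frames, use the temporally ordered cluster centres as the descriptor,
# then classify sequences with a multiclass SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def key_pose_descriptor(frames, n_key_poses=4):
    # frames: (n_frames, n_joints * 3) array of skeleton coordinates
    km = KMeans(n_clusters=n_key_poses, n_init=10, random_state=0).fit(frames)
    # order the key poses by the time each cluster first appears in the sequence
    order = np.argsort([np.argmax(km.labels_ == k) for k in range(n_key_poses)])
    return km.cluster_centers_[order].ravel()

rng = np.random.default_rng(2)
sequences = [rng.random((60, 20 * 3)) for _ in range(30)]   # 30 placeholder sequences
labels = rng.integers(0, 5, size=30)                        # 5 activity classes

X = np.array([key_pose_descriptor(s) for s in sequences])
clf = SVC(decision_function_shape="ovo").fit(X, labels)
print(clf.score(X, labels))
```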

  6. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors

    Directory of Open Access Journals (Sweden)

    Enea Cippitelli

    2016-01-01

    Full Text Available The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.

  7. Increased Efficiency of Face Recognition System using Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Rajani Muraleedharan

    2006-02-01

    Full Text Available This research was inspired by the need for a flexible and cost effective biometric security system. The flexibility of the wireless sensor network makes it a natural choice for data transmission. Swarm intelligence (SI) is used to optimize routing in a distributed time varying network. In this paper, SI maintains the required bit error rate (BER) for varied channel conditions while consuming minimal energy. A specific biometric, the face recognition system, is discussed as an example. Simulation shows that the wireless sensor network is efficient in energy consumption while keeping the transmission accuracy, and the wireless face recognition system is competitive to the traditional wired face recognition system in classification accuracy.

  8. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the
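    The recursive Bayesian update over candidate goals can be illustrated with a small Python sketch: each new piece of evidence (a user command or gaze sample) re-weights the belief through an assumed likelihood model. The goal set and the likelihood values below are purely illustrative, not the paper's user-agnostic model.

```python
# Hedged sketch of a recursive Bayesian update over candidate navigation goals.
import numpy as np

def update_belief(belief, likelihood):
    # belief: prior P(goal); likelihood: P(evidence | goal) for each goal
    posterior = belief * likelihood
    return posterior / posterior.sum()

goals = ["door", "desk", "window"]
belief = np.ones(len(goals)) / len(goals)          # uniform prior over goals

# Sequence of evidences; each row gives an assumed P(evidence_t | goal) per goal.
evidence_likelihoods = [
    np.array([0.7, 0.2, 0.1]),   # e.g. gaze falls near the door
    np.array([0.6, 0.3, 0.1]),   # steering input compatible with the door
]
for lik in evidence_likelihoods:
    belief = update_belief(belief, lik)
print(dict(zip(goals, belief.round(3))))
```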

  9. The use of open and machine vision technologies for development of gesture recognition intelligent systems

    Science.gov (United States)

    Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.

    2018-05-01

    The article discusses selected aspects of the development of an intelligent gesture recognition system. The peculiarity of the system is its intelligent block, which is based entirely on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for the choice of this set of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of images received from sensors. The authors expect that improvement of the algorithmic block of the system will increase the accuracy of gesture recognition up to 95%.

  10. Programmable molecular recognition based on the geometry of DNA nanostructures.

    Science.gov (United States)

    Woo, Sungwook; Rothemund, Paul W K

    2011-07-10

    From ligand-receptor binding to DNA hybridization, molecular recognition plays a central role in biology. Over the past several decades, chemists have successfully reproduced the exquisite specificity of biomolecular interactions. However, engineering multiple specific interactions in synthetic systems remains difficult. DNA retains its position as the best medium with which to create orthogonal, isoenergetic interactions, based on the complementarity of Watson-Crick binding. Here we show that DNA can be used to create diverse bonds using an entirely different principle: the geometric arrangement of blunt-end stacking interactions. We show that both binary codes and shape complementarity can serve as a basis for such stacking bonds, and explore their specificity, thermodynamics and binding rules. Orthogonal stacking bonds were used to connect five distinct DNA origami. This work, which demonstrates how a single attractive interaction can be developed to create diverse bonds, may guide strategies for molecular recognition in systems beyond DNA nanostructures.

  11. Gait Recognition Based on Outermost Contour

    Directory of Open Access Journals (Sweden)

    Lili Liu

    2011-10-01

    Full Text Available Gait recognition aims to identify people by the way they walk. In this paper, a simple but effective gait recognition method based on Outermost Contour is proposed. For each gait image sequence, an adaptive silhouette extraction algorithm is firstly used to segment the frames of the sequence and a series of postprocessing is applied to obtain the normalized silhouette images with less noise. Then a novel feature extraction method based on Outermost Contour is performed. Principal Component Analysis (PCA) is adopted to reduce the dimensionality of the distance signals derived from the Outermost Contours of silhouette images. Then Multiple Discriminant Analysis (MDA) is used to optimize the separability of gait features belonging to different classes. Nearest Neighbor (NN) classifier and Nearest Neighbor classifier with respect to class Exemplars (ENN) are used to classify the final feature vectors produced by MDA. In order to verify the effectiveness and robustness of our feature extraction algorithm, we also use two other classifiers: Backpropagation Neural Network (BPNN) and Support Vector Machine (SVM) for recognition. Experimental results on a gait database of 100 people show that the accuracy of using MDA, BPNN and SVM can achieve 97.67%, 94.33% and 94.67%, respectively.
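    The PCA-then-MDA chain described above can be sketched in Python; here scikit-learn's LinearDiscriminantAnalysis stands in for MDA and a 1-nearest-neighbour classifier for the NN decision. The distance-signal length, component counts and random data are placeholder assumptions.

```python
# Hedged sketch of the dimensionality-reduction chain: PCA on contour distance
# signals, a discriminant projection, and a nearest-neighbour decision.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random((200, 360))            # 200 gait samples, 360-point distance signals
y = np.repeat(np.arange(10), 20)      # 10 subjects, 20 samples each

model = make_pipeline(
    PCA(n_components=30),
    LinearDiscriminantAnalysis(n_components=9),   # at most n_classes - 1 components
    KNeighborsClassifier(n_neighbors=1),          # nearest-neighbour decision
)
model.fit(X, y)
print(model.score(X, y))
```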

  12. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  13. Depth-based human activity recognition: A comparative perspective study on feature extraction

    Directory of Open Access Journals (Sweden)

    Heba Hamdy Ali

    2018-06-01

    Full Text Available Depth-maps-based human activity recognition is the process of categorizing depth sequences according to a particular activity. Robust solutions to this problem serve applications in domains such as surveillance systems, computer vision, and video retrieval. The task is challenging due to variations within one class, the need to distinguish between activities of different classes, and varying video recording settings. In this study, we present a detailed review of current advances in depth-maps-based image representations and the feature extraction process. Moreover, we discuss the state-of-the-art datasets and the subsequent classification procedure. A comparative study of some of the more popular depth-map approaches is also provided in greater detail. The proposed methods are evaluated on three depth-based datasets, “MSR Action 3D”, “MSR Hand Gesture”, and “MSR Daily Activity 3D”, achieving 100%, 95.83%, and 96.55% accuracy, respectively; combining depth and color features on the “RGBD-HuDaAct” dataset achieved 89.1%. Keywords: Activity recognition, Depth, Feature extraction, Video, Human body detection, Hand gesture

  14. Deep Classifiers-Based License Plate Detection, Localization and Recognition on GPU-Powered Mobile Platform

    Directory of Open Access Journals (Sweden)

    Syed Tahir Hussain Rizvi

    2017-10-01

    Full Text Available The realization of a deep neural architecture on a mobile platform is challenging, but can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on the desktop workstation or a GPU server. This paper presents an embedded platform-based Italian license plate detection and recognition system using deep neural classifiers. In this work, trained parameters of a highly precise automatic license plate recognition (ALPR) system are imported and used to replicate the same neural classifiers on a Nvidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks. The flow of the trained architecture is simplified to perform the license plate recognition in real-time. Results show that the tasks of plate and character detection and localization can be performed in real-time on a mobile platform by simplifying the flow of the trained architecture. However, the accuracy of the simplified architecture would be decreased accordingly.

  15. DCT-based iris recognition.

    Science.gov (United States)

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10⁻⁴ on the available data sets.

  16. Robust Face Recognition Based on Texture Analysis

    Directory of Open Access Journals (Sweden)

    Sanun Srisuk

    2013-01-01

    Full Text Available In this paper, we present a new framework for face recognition with varying illumination based on DCT total variation minimization (DTV), a Gabor filter, a sub-micro-pattern analysis (SMP) and discriminated accumulative feature transform (DAFT). We first suppress the illumination effect by using the DCT with the help of TV as a tool for face normalization. The DTV image is then emphasized by the Gabor filter. The facial features are encoded by our proposed method - the SMP. The SMP image is then transformed to the 2D histogram using DAFT. Our system is verified with experiments on the AR and the Yale face database B.

  17. Chemical entity recognition in patents by combining dictionary-based and statistical approaches

    Science.gov (United States)

    Akhondi, Saber A.; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F.H.; Hettne, Kristina M.; van Mulligen, Erik M.; Kors, Jan A.

    2016-01-01

    We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents PMID:27141091

  18. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document image analysis, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess in all of them, and the choice of descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it was unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we examine the relations between moments and logos under different transforms, i.e., which moments are suited to logos with which transforms. Open datasets from the University of Maryland are employed. The moments-based comparisons are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.

  19. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Full Text Available Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques, a novel approach based on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing, and three optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending the training data sets during practical testing and final tuning for the production release. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which is less prone to overfitting when compared to related works. The accuracy achieved in this work is 86.89% for the recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment processing the Czech language, but the proposed methodology is general and can be applied to many different languages.

  20. The Relative Success of Recognition-Based Inference in Multichoice Decisions

    Science.gov (United States)

    McCloy, Rachel; Beaman, C. Philip; Smith, Philip T.

    2008-01-01

    The utility of an "ecologically rational" recognition-based decision rule in multichoice decision problems is analyzed, varying the type of judgment required (greater or lesser). The maximum size and range of a counterintuitive advantage associated with recognition-based judgment (the "less-is-more effect") is identified for a range of cue…

  1. Object Recognition System-on-Chip Using the Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Houzet Dominique

    2005-01-01

    Full Text Available The first aim of this work is to propose the design of a system-on-chip (SoC) platform dedicated to digital image and signal processing, which is tuned to implement efficiently multiply-and-accumulate (MAC) vector/matrix operations. The second aim of this work is to implement a recent promising neural network method, namely, the support vector machine (SVM) used for real-time object recognition, in order to build a vision machine. With such a reconfigurable and programmable SoC platform, it is possible to implement any SVM function dedicated to any object recognition problem. The final aim is to obtain an automatic reconfiguration of the SoC platform, based on the results of the learning phase on an objects' database, which makes it possible to recognize practically any object without manual programming. Recognition can be of any kind, from image to signal data. Such a system is a general-purpose automatic classifier. Many applications can be considered as a classification problem, but are usually treated specifically in order to optimize the cost of the implemented solution. The cost of our approach is higher than that of a dedicated one, but in the near future, hundreds of millions of gates will be common and affordable compared to the design cost. What we are proposing here is a general-purpose classification neural network implemented on a reconfigurable SoC platform. The first version presented here is limited in size and thus in object recognition performance, but can be easily upgraded according to technology improvements.

  2. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome the shortcoming, we utilize the advantage of uniform design--space filling designs and uniformly scattering theory to seek for optimal SVM hyperparameters. Then we propose a face recognition scheme based on SVM with optimal model which obtained by replacing the grid and gradient-based method with uniform design. The experimental results on Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
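    The spirit of the approach, replacing an exhaustive grid with a small set of well-scattered hyperparameter points, can be sketched as follows in Python. The uniform random sampling below only approximates a true uniform design table, and the dataset and search ranges are illustrative assumptions.

```python
# Hedged sketch of SVM model selection: evaluate a handful of (C, gamma) points
# scattered over the log-scaled search space and keep the best cross-validated
# model, instead of sweeping a full grid.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = load_digits(return_X_y=True)
X, y = X[:600], y[:600]                 # small subset to keep the sketch fast

# 12 candidate points spread over log10(C) in [-1, 3] and log10(gamma) in [-5, -1]
candidates = [(10.0 ** c, 10.0 ** g)
              for c, g in zip(rng.uniform(-1, 3, 12), rng.uniform(-5, -1, 12))]

best = max(candidates,
           key=lambda p: cross_val_score(SVC(C=p[0], gamma=p[1]), X, y, cv=3).mean())
print("selected C=%.3g gamma=%.3g" % best)
```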

  3. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  4. Shape-based hand recognition approach using the morphological pattern spectrum

    Science.gov (United States)

    Ramirez-Cortes, Juan Manuel; Gomez-Gil, Pilar; Sanchez-Perez, Gabriel; Prieto-Castro, Cesar

    2009-01-01

    We propose the use of the morphological pattern spectrum, or pecstrum, as the basis of a biometric shape-based hand recognition system. The system receives an image of the right hand of a subject in an unconstrained pose, which is captured with a commercial flatbed scanner. Owing to the pecstrum's property of invariance to translation and rotation, the system does not require the use of pegs for a fixed hand position, which simplifies the image acquisition process. This novel feature-extraction method is tested using a Euclidean distance classifier for identification and verification cases, obtaining 97% correct identification, and an equal error rate (EER) of 0.0285 (2.85%) for the verification mode. The obtained results indicate that the pattern spectrum represents a good feature-extraction alternative for low- and medium-level hand-shape-based biometric applications.
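    A minimal Python sketch of the morphological pattern spectrum (pecstrum) is given below: it records the object area removed by openings with structuring elements of increasing radius, normalized by the total area. The disk-shaped element, the radius range and the synthetic blob are assumptions for illustration only.

```python
# Hedged sketch of the pattern spectrum: area lost at each opening scale,
# normalized by the total object area.
import numpy as np
from scipy import ndimage

def pattern_spectrum(binary, max_radius=8):
    area = binary.sum()
    spectrum = []
    prev = binary.astype(bool)
    for r in range(1, max_radius + 1):
        # disk structuring element of radius r
        yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
        selem = (xx ** 2 + yy ** 2) <= r ** 2
        opened = ndimage.binary_opening(binary, structure=selem)
        spectrum.append((prev.sum() - opened.sum()) / area)   # area removed at scale r
        prev = opened
    return np.array(spectrum)

# Synthetic "hand-like" blob: a square palm with a thin finger
img = np.zeros((64, 64), dtype=bool)
img[20:50, 20:50] = True
img[5:20, 32:36] = True
print(pattern_spectrum(img).round(3))
```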

  5. sEMG-Based Gesture Recognition with Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhen Ding

    2018-06-01

    Full Text Available The traditional classification methods for limb motion recognition based on sEMG have been deeply researched and have shown promising results. However, information loss during feature extraction reduces the recognition accuracy. To obtain higher accuracy, a deep learning method was introduced. In this paper, we propose a parallel multiple-scale convolution architecture. Compared with the state-of-the-art methods, the proposed architecture fully considers the characteristics of the sEMG signal. Larger kernel filter sizes than those commonly used in other CNN-based hand recognition methods are adopted. Meanwhile, a characteristic of the sEMG signal, namely muscle independence, is considered when designing the architecture. All the classification methods were evaluated on the NinaPro database. The results show that the proposed architecture has the highest recognition accuracy. Furthermore, the results indicate that a parallel multiple-scale convolution architecture with larger kernel filters and consideration of muscle independence can significantly increase the classification accuracy.

  6. PALESTINE AUTOMOTIVE LICENSE IDENTITY RECOGNITION FOR INTELLIGENT PARKING SYSTEM

    OpenAIRE

    ANEES ABU SNEINEH; WAEL A. SALAH

    2017-01-01

    Providing employees with protection and security is one of the key concerns of any organization. This goal can be implemented mainly by managing and protecting employees’ cars in the parking area. Therefore, a parking area must be managed and organized with smart technologies and tools that can be applied and integrated in an intelligent parking system. This paper presents the tools based on image recognition technology that can be used to effectively control various parts of a parking sys...

  7. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    thermal face images enhance the performance of face recognition systems. The thesis also presents an application of cepstral analysis for face recognition. A cepstrum-based face recognition system is introduced and tested for various types of degradation

  8. Finger Vein Recognition Based on Local Directional Code

    Science.gov (United States)

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
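    The coding idea can be illustrated with a short Python sketch: the local gradient orientation at each pixel is quantized into one of eight directions (an octonary code) and the image is described by the code histogram. The gradient operator and histogram pooling are assumptions; this is not the exact published LDC descriptor.

```python
# Hedged sketch of direction coding: quantize local gradient orientation into
# eight bins and describe the image by the normalized code histogram.
import numpy as np

def ldc_histogram(image):
    gy, gx = np.gradient(image.astype(float))
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)            # orientation in [0, 2*pi)
    codes = np.floor(angle / (np.pi / 4)).astype(int) % 8     # 8 direction bins (octonary code)
    hist = np.bincount(codes.ravel(), minlength=8).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(5)
vein_roi = rng.random((64, 128))           # placeholder finger-vein region of interest
print(ldc_histogram(vein_roi).round(3))
```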

  9. Finger Vein Recognition Based on Local Directional Code

    Directory of Open Access Journals (Sweden)

    Rongyang Xiao

    2012-11-01

    Full Text Available Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.

  10. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Full Text Available Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single facial feature used individually, regardless of the landmark selection method.

  11. Theoretical Aspects of the Patterns Recognition Statistical Theory Used for Developing the Diagnosis Algorithms for Complicated Technical Systems

    Science.gov (United States)

    Obozov, A. A.; Serpik, I. N.; Mihalchenko, G. S.; Fedyaeva, G. A.

    2017-01-01

    In the article, the problem of application of the pattern recognition (a relatively young area of engineering cybernetics) for analysis of complicated technical systems is examined. It is shown that the application of a statistical approach for hard distinguishable situations could be the most effective. The different recognition algorithms are based on Bayes approach, which estimates posteriori probabilities of a certain event and an assumed error. Application of the statistical approach to pattern recognition is possible for solving the problem of technical diagnosis complicated systems and particularly big powered marine diesel engines.

  12. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We researched three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multi-features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.

  13. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve the human-computer interaction (HCI) to be as good as human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and not available in real scenarios. Additionally, we provide a method to investigate the performance of the geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points, we can achieve the state-of-the-art recognition rate. However, this state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by the errors in the facial point localization, especially for the expressions with subtle facial deformations.

  14. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    Science.gov (United States)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interactions between human operators and UAVs (Unmanned Aerial Vehicle) are demanded by the military industry to achieve a better balance of the UAV control and the burden of the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to efficiently work with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of the inter-sign interference has been established. Promising results have been achieved in a depth-based database of hand gestures especially developed for the validation of the proposed system.

  15. Human Skeleton Model Based Dynamic Features for Walking Speed Invariant Gait Recognition

    Directory of Open Access Journals (Sweden)

    Jure Kovač

    2014-01-01

    Full Text Available Humans are able to recognize small number of people they know well by the way they walk. This ability represents basic motivation for using human gait as the means for biometric identification. Such biometrics can be captured at public places from a distance without subject's collaboration, awareness, and even consent. Although current approaches give encouraging results, we are still far from effective use in real-life applications. In general, methods set various constraints to circumvent the influence of covariate factors like changes of walking speed, view, clothing, footwear, and object carrying, that have negative impact on recognition performance. In this paper we propose a skeleton model based gait recognition system focusing on modelling gait dynamics and eliminating the influence of subjects appearance on recognition. Furthermore, we tackle the problem of walking speed variation and propose space transformation and feature fusion that mitigates its influence on recognition performance. With the evaluation on OU-ISIR gait dataset, we demonstrate state of the art performance of proposed methods.

  16. Face recognition based on depth maps and surface curvature

    Science.gov (United States)

    Gordon, Gaile G.

    1991-09-01

    This paper explores the representation of the human face by features based on the curvature of the face surface. Curvature captures many features necessary to accurately describe the face, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images. Moreover, the value of curvature at a point on the surface is also viewpoint invariant. Until recently, range data of high enough resolution and accuracy to perform useful curvature calculations on the scale of the human face had been unavailable. Although several researchers have worked on the problem of interpreting range data from curved (although usually highly geometrically structured) surfaces, the main approaches have centered on segmentation by signs of mean and Gaussian curvature which have not proved sufficient in themselves for the case of the human face. This paper details the calculation of principal curvature for a particular data set, the calculation of general surface descriptors based on curvature, and the calculation of face specific descriptors based both on curvature features and a priori knowledge about the structure of the face. These face specific descriptors can be incorporated into many different recognition strategies. A system that implements one such strategy, depth template comparison, giving recognition rates between 80% and 90% is described.

  17. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing algorithms of banknote recognition are limited to work for restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To authenticate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves 100% recognition rate on our challenging dataset.

  18. Authentication: From Passwords to Biometrics: An implementation of a speaker recognition system on Android

    OpenAIRE

    Heimark, Erlend

    2012-01-01

    We implement a biometric authentication system on the Android platform, which is based on text-dependent speaker recognition. The Android version used in the application is Android 4.0. The application makes use of the Modular Audio Recognition Framework, from which many of the algorithms are adapted in the processes of preprocessing and feature extraction. In addition, we employ the Dynamic Time Warping (DTW) algorithm for the comparison of different voice features. A training procedure is i...

  19. An aptamer-based fluorescence bio-sensor for chiral recognition of arginine enantiomers.

    Science.gov (United States)

    Yuan, Haiyan; Huang, Yunmei; Yang, Jidong; Guo, Yuan; Zeng, Xiaoqing; Zhou, Shang; Cheng, Jiawei; Zhang, Yuhui

    2018-07-05

    In this study, a novel aptamer-based fluorescence bio-sensor (aptamer-AuNps) was developed for chiral recognition of arginine (Arg) enantiomers, based on an aptamer and gold nanoparticles (AuNps). Carboxyfluorescein (FAM) labeled aptamers (Apt) were adsorbed on AuNps, and their fluorescence intensity could be significantly quenched by the AuNps through fluorescence resonance energy transfer (FRET). Once d-Arg or l-Arg was added into the above solution, the aptamer specifically bound to the Arg enantiomer and was released from the AuNps, so the fluorescence intensities of both the d-Arg system and the l-Arg system were enhanced. The affinity of the Apt for l-Arg is tighter than for d-Arg, so the enhanced fluorescence signal of the l-Arg system was stronger than that of the d-Arg system. Moreover, the fluorescence enhancement was directly proportional to the concentration of d-Arg and l-Arg in the ranges 0-300 nM and 0-400 nM, with correlation coefficients of 0.9939 and 0.9952, respectively. Furthermore, the method was successfully applied to the detection of l-Arg in human urine samples with satisfactory results. Eventually, a simple "OR" logic gate with d-Arg and l-Arg as inputs and the AuNps aggregation state as output was fabricated, which can help us understand the chiral recognition process more deeply. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Emotion Recognition of Speech Signals Based on Filter Methods

    Directory of Open Access Journals (Sweden)

    Narjes Yazdanian

    2016-10-01

    Full Text Available Speech is the basic means of communication among human beings. With the increase of interaction between humans and machines, the need for automatic dialogue that removes the human factor has been considered. The aim of this study was to determine a set of affective features of the speech signal that are related to emotions. The designed system includes three main sections: feature extraction, feature selection, and classification. After extraction of useful features such as mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPC), perceptual linear prediction coefficients (PLP), formant frequency, zero crossing rate, cepstral coefficients, pitch frequency, mean, jitter, shimmer, energy, minimum, maximum, amplitude, and standard deviation, filter methods such as the Pearson correlation coefficient, t-test, Relief, and information gain were used to rank and select the features effective for emotion recognition. The selected features are then given to the classification system as a subset of its input. In this classification stage, a multiclass support vector machine is used to classify seven types of emotion. According to the results, the Relief method together with the multiclass support vector machine achieves the highest classification accuracy, with an emotion recognition rate of 93.94%.
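    The filter-based ranking step can be sketched in Python as follows: each candidate feature is scored by its absolute Pearson correlation with the emotion label and only the top-ranked subset is passed to a multiclass SVM. The feature matrix and labels are random placeholders for the extracted prosodic and cepstral features.

```python
# Hedged sketch of filter-based feature selection: rank features by absolute
# Pearson correlation with the label and train an SVM on the selected subset.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.random((300, 40))                 # 300 utterances, 40 candidate features
y = rng.integers(0, 7, size=300)          # 7 emotion classes

scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
top = np.argsort(scores)[::-1][:10]       # keep the 10 best-correlated features
clf = SVC(decision_function_shape="ovr").fit(X[:, top], y)
print("selected features:", sorted(top.tolist()))
```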

  1. Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2014-01-01

    Full Text Available This paper applied speech recognition and RFID technologies to develop an omni-directional mobile robot into a robot with voice control and guide introduction functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded isolated words for the robot to create a speech database of specific speakers. After pre-processing of this speech database, the feature parameters of the cepstrum and delta-cepstrum were obtained using linear predictive coefficients (LPC). Then, the Hidden Markov Model (HMM) was used for model training on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference sample for speech recognition. The trained reference models were put into the industrial computer on the robot platform, and the user entered isolated words to be tested. After processing with the same feature extraction and comparison against the reference models, the model whose path had the maximum total probability, found using the Viterbi algorithm, gave the recognition result. Finally, the speech recognition and RFID systems were deployed in an actual environment and implemented on the omni-directional mobile robot to prove their feasibility and stability.
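    The Viterbi decoding step referred to above can be illustrated with a compact Python sketch that recovers the most probable hidden state sequence of an HMM for a given observation sequence. The two-state, three-symbol model below is a toy example, not the trained word models of the system.

```python
# Hedged sketch of Viterbi decoding for a discrete-observation HMM.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    n_states, T = trans_p.shape[0], len(obs)
    logv = np.full((T, n_states), -np.inf)       # best log-probability per state and time
    back = np.zeros((T, n_states), dtype=int)    # backpointers for path recovery
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 0, -1):                # trace the best path backwards
        path.append(back[t, path[-1]])
    return path[::-1], logv[-1].max()

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```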

  2. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Singe Scale Retinex.

    Science.gov (United States)

    Xie, Shan Juan; Lu, Yu; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2015-07-14

    Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary person by person. This usually causes poor quality of finger vein images, therefore degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy.
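    The enhancement idea, a single-scale retinex whose illumination estimate comes from an edge-preserving guided filter rather than a Gaussian surround, can be sketched in Python as below. The window radius, regularization and the self-guided filtering are assumptions and not the tuned parameters of the proposed GFSSR method.

```python
# Hedged sketch of guided-filter-based single-scale retinex: estimate the
# illumination with a (self-)guided filter and keep the log-domain residual
# as the enhanced reflectance image.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)    # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def gfssr(image):
    img = image.astype(float) / 255.0 + 1e-6
    illumination = guided_filter(img, img)             # edge-preserving smoothing
    return np.log(img) - np.log(illumination + 1e-6)   # retinex: reflectance estimate

rng = np.random.default_rng(7)
finger_roi = (rng.random((80, 160)) * 255).astype(np.uint8)   # placeholder finger image
print(gfssr(finger_roi).shape)
```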

  3. System of breast cancer recognition

    International Nuclear Information System (INIS)

    Rozhkova, N.I.

    1984-01-01

    The paper is concerned with the results of a multimodality system of breast cancer recognition using clinical, X-ray and cytological examination methods. Altogether 1671 women were examined; breast cancer was detected in 165. Stage 1 was detected in 63 patients, Stage 2 in 34, Stage 3 in 34, and Stage 4 in 8. In 7% of the cases, tumors were impalpable and could be detected by X-ray only. In 9.9% of the cases, the multicentric nature of tumor growth was established. In 71%, tumors had a mixed histological structure. The system of breast cancer recognition provided an accurate diagnosis in 98% of the cases, making it possible to avoid surgical intervention in 38%. Good diagnostic results are possible under the conditions of a special mammology unit, where a roentgenologist, working in close contact with surgeons and morphologists, performs the first stages of diagnosis, from clinical examination up to special methods that require X-ray control (paracentesis, ductography, pneumocystography, preoperative marking of the breast and marking of the remote sectors of the breast).

  4. UAV Visual Autolocalization Based on Automatic Landmark Recognition

    Science.gov (United States)

    Silva Filho, P.; Shiguemori, E. H.; Saotome, O.

    2017-08-01

    Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a highly discussed problem in the scientific community. There are several approaches being developed, but the main strategies yet considered are computer vision based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known, landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position in each landmark recognition was compatible with the GPS data, stating that the developed method can be used as an alternative navigation system.

  5. Connected digit speech recognition system for Malayalam language

    Indian Academy of Sciences (India)

    Connected digit speech recognition is important in many applications such as automated banking systems, catalogue-dialing and automatic data entry. This paper presents an optimum speaker-independent connected digit recognizer for the Malayalam language. The system employs Perceptual ...

  6. Auditory analysis for speech recognition based on physiological models

    Science.gov (United States)

    Jeon, Woojay; Juang, Biing-Hwang

    2004-05-01

    To address the limitations of traditional cepstrum or LPC based front-end processing methods for automatic speech recognition, more elaborate methods based on physiological models of the human auditory system may be used to achieve more robust speech recognition in adverse environments. For this purpose, a modified version of a model of the primary auditory cortex featuring a three dimensional mapping of auditory spectra [Wang and Shamma, IEEE Trans. Speech Audio Process. 3, 382-395 (1995)] is adopted and investigated for its use as an improved front-end processing method. The study is conducted in two ways: first, by relating the model's redundant representation to traditional spectral representations and showing that the former not only encompasses information provided by the latter, but also reveals more relevant information that makes it superior in describing the identifying features of speech signals; and second, by observing the statistical features of the representation for various classes of sound to show how different identifying features manifest themselves as specific patterns on the cortical map, thereby becoming a place-coded data set on which detection theory could be applied to simulate auditory perception and cognition.

  7. Markov Models for Handwriting Recognition

    CERN Document Server

    Plotz, Thomas

    2011-01-01

    Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden

  8. Iris recognition in less constrained environments: a video-based approach

    OpenAIRE

    Mahadeo, Nitin Kumar

    2017-01-01

    This dissertation focuses on iris biometrics. Although the iris is the most accurate biometric, its adoption has been relatively slow. Conventional iris recognition systems utilize still eye images captured in ideal environments and require highly constrained subject presentation. A drop in recognition performance is observed when these constraints are removed as the quality of the data acquired is affected by heterogeneous factors. For iris recognition to be widely adopted, it can therefore ...

  9. Chemical entity recognition in patents by combining dictionary-based and statistical approaches.

    Science.gov (United States)

    Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A

    2016-01-01

    We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents. © The Author(s) 2016. Published by Oxford University Press.

  10. Pipeline Structural Damage Detection Using Self-Sensing Technology and PNN-Based Pattern Recognition

    International Nuclear Information System (INIS)

    Lee, Chang Gil; Park, Woong Ki; Park, Seung Hee

    2011-01-01

    In a structure, damage can occur at several scales from micro-cracking to corrosion or loose bolts. This makes the identification of damage difficult with one mode of sensing. Hence, a multi-mode actuated sensing system is proposed based on a self-sensing circuit using a piezoelectric sensor. In the self sensing-based multi-mode actuated sensing, one mode provides a wide frequency-band structural response from the self-sensed impedance measurement and the other mode provides a specific frequency-induced structural wavelet response from the self-sensed guided wave measurement. In this study, an experimental study on the pipeline system is carried out to verify the effectiveness and the robustness of the proposed structural health monitoring approach. Different types of structural damage are artificially inflicted on the pipeline system. To classify the multiple types of structural damage, a supervised learning-based statistical pattern recognition is implemented by composing a two-dimensional space using the damage indices extracted from the impedance and guided wave features. For more systematic damage classification, several control parameters to determine an optimal decision boundary for the supervised learning-based pattern recognition are optimized. Finally, further research issues will be discussed for real-world implementation of the proposed approach
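
    The classification step can be pictured with a small probabilistic neural network (PNN), i.e., a Parzen-window classifier, operating on a two-dimensional space of damage indices. The damage-index values, class labels and smoothing parameter below are synthetic placeholders.

    ```python
    import numpy as np

    class PNN:
        """Parzen-window (probabilistic neural network) classifier with a Gaussian kernel."""
        def __init__(self, sigma=0.08):
            self.sigma = sigma

        def fit(self, X, y):
            self.X, self.y = np.asarray(X, float), np.asarray(y)
            self.classes = np.unique(self.y)
            return self

        def predict(self, X):
            preds = []
            for x in np.asarray(X, float):
                k = np.exp(-((self.X - x) ** 2).sum(axis=1) / (2.0 * self.sigma ** 2))
                scores = [k[self.y == c].mean() for c in self.classes]   # per-class average response
                preds.append(self.classes[int(np.argmax(scores))])
            return np.array(preds)

    rng = np.random.default_rng(1)
    # Placeholder 2-D damage indices (impedance feature, guided-wave feature) for three conditions.
    X_train = np.vstack([rng.normal(loc, 0.05, size=(20, 2))
                         for loc in ([0.2, 0.2], [0.6, 0.3], [0.4, 0.8])])
    y_train = np.repeat(["intact", "loose_bolt", "crack"], 20)
    print(PNN().fit(X_train, y_train).predict([[0.58, 0.33]]))
    ```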

  11. Human action recognition based on estimated weak poses

    Science.gov (United States)

    Gong, Wenjuan; Gonzàlez, Jordi; Roca, Francesc Xavier

    2012-12-01

    We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space while still keeping the most discriminative information for a given pose. With predicted poses from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag of words pipeline, with the vocabulary built from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion is proven to be more efficient and robust against the inherent challenges of action recognition. Moreover, since the ordering of the poses is discriminative for action recognition, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help to improve action recognition accuracy. The proposed method is scene-independent and is comparable with state-of-the-art methods.

  12. Finger Vein Recognition Based on a Personalized Best Bit Map

    Science.gov (United States)

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and uses only the best bits for matching. We first present the concept of the PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that the PBBM achieves not only better performance, but also high robustness and reliability. In addition, the PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735

  13. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method called the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
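
    The following sketch illustrates the two core steps, AGDI construction and 2DPCA feature extraction, on synthetic silhouette sequences; the frame size, sequence length and number of retained eigenvectors are placeholders.

    ```python
    import numpy as np

    def average_gait_differential_image(silhouettes):
        """silhouettes: (T, H, W) aligned binary frames; returns the (H, W) AGDI."""
        diffs = np.abs(np.diff(silhouettes.astype(np.float64), axis=0))
        return diffs.mean(axis=0)

    def two_dpca(images, n_components=8):
        """2DPCA: eigenvectors of G = mean over images of (A - mean)^T (A - mean)."""
        mean = images.mean(axis=0)
        G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
        _, eigvecs = np.linalg.eigh(G)
        basis = eigvecs[:, ::-1][:, :n_components]        # top eigenvectors (columns)
        return np.stack([A @ basis for A in images]), basis

    rng = np.random.default_rng(0)
    sequences = rng.random((5, 30, 64, 44)) > 0.5         # five fake 30-frame gait sequences
    agdis = np.stack([average_gait_differential_image(s) for s in sequences])
    features, basis = two_dpca(agdis)
    print(features.shape)                                 # (5, 64, 8) feature matrices
    ```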

  14. Recognition Stage for a Speed Supervisor Based on Road Sign Detection

    Directory of Open Access Journals (Sweden)

    José María Armingol

    2012-09-01

    Full Text Available Traffic accidents are still one of the main health problems in the world. A number of measures have been applied in order to reduce the number of injuries and fatalities on roads, e.g., the implementation of Advanced Driver Assistance Systems (ADAS) based on image processing. In this paper, a real time speed supervisor based on road sign recognition that can work both in urban and non-urban environments is presented. The system is able to recognize 135 road signs, belonging to the danger, yield, prohibition, obligation and indication types, and sends warning messages to the driver upon the combination of two pieces of information: the current speed of the car and the road sign symbol. The core of this paper is the comparison between the two main methods which have traditionally been used for detection and recognition of road signs: template matching (TM) and neural networks (NN). The advantages and disadvantages of the two approaches are shown and discussed. Additionally, we show how the use of well-known algorithms to avoid illumination issues reduces the number of images needed to train a neural network.

  15. Dielectric and ferroelectric sensing based on molecular recognition in Cu(1,10-phenanthroline)2SeO4.(diol) systems

    Science.gov (United States)

    Ye, Heng-Yun; Liao, Wei-Qiang; Zhou, Qionghua; Zhang, Yi; Wang, Jinlan; You, Yu-Meng; Wang, Jin-Yun; Chen, Zhong-Ning; Li, Peng-Fei; Fu, Da-Wei; Huang, Songping D.; Xiong, Ren-Gen

    2017-02-01

    The process of molecular recognition is the assembly of two or more molecules through weak interactions. Information in the process of molecular recognition can be transmitted to us via physical signals, which may find applications in sensing and switching. The conventional signals are mainly limited to light signals. Here, we describe the recognition of diols with Cu(1,10-phenanthroline)2SeO4 and the transduction of discrete recognition events into dielectric and/or ferroelectric signals. We observe that Cu(1,10-phenanthroline)2SeO4.(diol) systems exhibit significant dielectric and/or ferroelectric dependence on the different diol molecules. The compounds including ethane-1,2-diol or propane-1,2-diol show only small temperature-dependent dielectric anomalies and no reversible polarization, while the compound including propane-1,3-diol shows giant temperature-dependent dielectric anomalies as well as ferroelectric reversible spontaneous polarization. This finding shows that dielectricity and/or ferroelectricity has the potential to be used for signalling molecular recognition.

  16. Intelligent fault recognition strategy based on adaptive optimized multiple centers

    Science.gov (United States)

    Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong

    2018-06-01

    For the recognition principle based on a single optimized center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix by means of multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the number of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to adaptively control the number of optimized centers while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on different distribution characteristics of data.

  17. Towards discrete wavelet transform-based human activity recognition

    Science.gov (United States)

    Khare, Manish; Jeon, Moongu

    2017-06-01

    Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as a feature of human objects to obtain advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods.
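
    As a hedged illustration of a DWT-based descriptor (the paper does not fix an exact feature), the sketch below decomposes a frame with a multi-level 2-D wavelet transform and uses subband energies as the feature vector; the wavelet and level count are arbitrary choices.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def dwt_energy_features(image, wavelet="haar", level=3):
        """Multi-level 2-D DWT; feature = mean energy of each subband."""
        coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]                 # approximation subband energy
        for cH, cV, cD in coeffs[1:]:                     # detail subbands per level
            feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
        return np.array(feats)

    frame = np.random.default_rng(0).random((120, 160))   # placeholder frame / silhouette
    print(dwt_energy_features(frame).shape)               # (1 + 3 * level,) = (10,)
    ```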

  18. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    Science.gov (United States)

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with CALL (computer-assisted language learning) software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  19. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    Science.gov (United States)

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study is to evaluate the accuracy of two methods in the diagnosis of fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of automatic hyphae detection based on image recognition in the diagnosis of fungal keratitis. We analyze the consistency between clinical symptoms and the density of hyphae, and perform quantification using automatic hyphae detection based on image recognition. In our study, 56 cases with fungal keratitis (one eye each) and 23 cases with bacterial keratitis were included. All cases underwent the routine inspection of slit lamp biomicroscopy, corneal smear examination, microorganism culture and the assessment of in vivo confocal microscopy images before starting medical treatment. We then recognize the hyphae in the in vivo confocal microscopy images using automatic hyphae detection based on image recognition to evaluate its sensitivity and specificity and compare it with corneal smear examination. The next step is to use the density index to assess the severity of infection, find the correlation with the patients' clinical symptoms and evaluate the consistency between them. The accuracy of this technology was superior to that of corneal smear examination. The sensitivity of automatic hyphae detection based on image recognition was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the severity grading obtained by automatic hyphae detection based on image recognition and the clinical grading is 0.87. Automatic hyphae detection based on image recognition identified fungal keratitis with high sensitivity and specificity, better than corneal smear examination. This technology has advantages when compared with the conventional manual identification of confocal

  20. Face Recognition using Artificial Neural Network | Endeshaw | Zede ...

    African Journals Online (AJOL)

    Face recognition (FR) is one of the biometric methods to identify the individuals by the features of face. Two Face Recognition Systems (FRS) based on Artificial Neural Network (ANN) have been proposed in this paper based on feature extraction techniques. In the first system, Principal Component Analysis (PCA) has been ...

  1. A Robust and Fast Computation Touchless Palm Print Recognition System Using LHEAT and the IFkNCN Classifier

    Directory of Open Access Journals (Sweden)

    Haryati Jaafar

    2015-01-01

    Full Text Available Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, the improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
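
    The enhancement stage can be approximated as follows; OpenCV's CLAHE and adaptive thresholding are used here as stand-ins for the paper's sliding-neighborhood LHEAT operator, and the file name and parameters are placeholders.

    ```python
    import cv2

    img = cv2.imread("palm_roi.png", cv2.IMREAD_GRAYSCALE)   # placeholder ROI image

    # Local contrast enhancement (CLAHE as a stand-in for sliding-neighborhood LHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)

    # Local adaptive thresholding to expose the palm print lines.
    binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, blockSize=31, C=5)

    cv2.imwrite("palm_roi_binary.png", binary)
    ```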

  2. Development of Portable Automatic Number Plate Recognition System on Android Mobile Phone

    Science.gov (United States)

    Mutholib, Abdul; Gunawan, Teddy S.; Chebil, Jalel; Kartiwi, Mira

    2013-12-01

    The Automatic Number Plate Recognition (ANPR) system plays a major role in various access control and security applications, such as tracking of stolen vehicles, traffic violations (speed traps) and parking management systems. In this paper, a portable ANPR implemented on an Android mobile phone is presented. The main challenges in a mobile application include higher coding efficiency, reduced computational complexity, and improved flexibility. Significant effort is being devoted to finding a suitable and adaptive algorithm for implementation of ANPR on a mobile phone. An ANPR system for a mobile phone needs to be optimized due to limited CPU and memory resources, while providing the ability to geo-tag captured images using GPS coordinates and to access an online database to store vehicle information. In this paper, the design of portable ANPR on an Android mobile phone is described as follows. First, the graphical user interface (GUI) for capturing images using the built-in camera was developed to acquire Malaysian vehicle plate numbers. Second, the preprocessing of the raw image was done using contrast enhancement. Next, character segmentation using fixed pitch and optical character recognition (OCR) using a neural network were utilized to extract texts and numbers. Both character segmentation and OCR used the Tesseract library from Google Inc. The proposed portable ANPR algorithm was implemented and simulated using the Android SDK on a computer. Based on the experimental results, the proposed system can effectively recognize license plate numbers with an accuracy of 90.86%. The required processing time to recognize a license plate is only 2 seconds on average. This result compares well with previous systems running on desktop PCs, which report recognition rates from 91.59% to 98% and recognition times from 0.284 to 1.5 seconds.
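
    A hypothetical desktop re-creation of the recognition stage is sketched below: contrast enhancement, binarization and Tesseract OCR. The Android implementation in the paper calls the Tesseract library directly; here pytesseract is a stand-in, and the file name, whitelist and page-segmentation settings are illustrative.

    ```python
    import cv2
    import pytesseract

    plate = cv2.imread("plate_crop.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder cropped plate
    plate = cv2.equalizeHist(plate)                              # contrast enhancement
    _, plate_bw = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Single-line page segmentation and an alphanumeric whitelist (illustrative settings).
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    text = pytesseract.image_to_string(plate_bw, config=config)
    print("recognized plate:", text.strip())
    ```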

  3. A sensor and video based ontology for activity recognition in smart environments.

    Science.gov (United States)

    Mitchell, D; Morrow, Philip J; Nugent, Chris D

    2014-01-01

    Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.

  4. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    Science.gov (United States)

    Wang, Juan

    2018-03-01

    The iris image is easily polluted by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with hybrid features. 2D-Gabor filters and the GLCM are employed to generate a multi-granularity hybrid feature vector. The 2D-Gabor filter and GLCM features capture low-to-intermediate frequency and high frequency texture information, respectively. Finally, we utilize an extreme learning machine for iris recognition. Experimental results reveal that our proposed ELM based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance. The proposed ELM-MGIR algorithm outperforms other mainstream iris recognition algorithms.
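
    The classifier itself is simple enough to sketch: an extreme learning machine with a random hidden layer and closed-form (pseudo-inverse) output weights. The hybrid Gabor/GLCM feature vectors are assumed to be pre-computed; the data below is synthetic.

    ```python
    import numpy as np

    class ELM:
        """Single-hidden-layer ELM: random input weights, pseudo-inverse output weights."""
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))      # sigmoid activations

        def fit(self, X, y):
            self.classes, y_idx = np.unique(y, return_inverse=True)
            targets = np.eye(len(self.classes))[y_idx]                # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.beta = np.linalg.pinv(self._hidden(X)) @ targets     # closed-form solution
            return self

        def predict(self, X):
            return self.classes[np.argmax(self._hidden(X) @ self.beta, axis=1)]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 64))        # placeholder hybrid Gabor/GLCM feature vectors
    y = rng.integers(0, 10, size=300)     # placeholder labels for ten subjects
    print("training accuracy:", (ELM().fit(X, y).predict(X) == y).mean())
    ```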

  5. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to each classifier for labeling. Finally, a decision level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method can reach 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.

  6. Optical Character Recognition.

    Science.gov (United States)

    Converso, L.; Hocek, S.

    1990-01-01

    This paper describes computer-based optical character recognition (OCR) systems, focusing on their components (the computer, the scanner, the OCR, and the output device); how the systems work; and features to consider in selecting a system. A list of 26 questions to ask to evaluate systems for potential purchase is included. (JDD)

  7. Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment

    Directory of Open Access Journals (Sweden)

    Muhammad Arsalan

    2017-11-01

    Full Text Available Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes of the iris region are confronted with many problems, such as heavy occlusion of eyelashes, invalid off-axis rotations, motion blurs, and non-regular reflections in the eye area. In addition, iris recognition based on a visible light environment has been investigated to avoid the use of an additional near-infrared (NIR) light camera and NIR illuminator, which increased the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues, this study proposes a two-stage iris segmentation scheme based on a convolutional neural network (CNN), which is capable of accurate iris segmentation in the severely noisy environments of iris recognition by a visible light camera sensor. In the experiments, the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and the mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed the existing segmentation methods.

  8. WORD LEVEL DISCRIMINATIVE TRAINING FOR HANDWRITTEN WORD RECOGNITION

    NARCIS (Netherlands)

    Chen, W.; Gader, P.

    2004-01-01

    Word level training refers to the process of learning the parameters of a word recognition system based on word level criteria functions. Previously, researchers trained lexicon-driven handwritten word recognition systems at the character level individually. These systems generally use statistical

  9. Hardware/Software Co-Design of a Traffic Sign Recognition System Using Zynq FPGAs

    Directory of Open Access Journals (Sweden)

    Yan Han

    2015-12-01

    Full Text Available Traffic sign recognition (TSR, taken as an important component of an intelligent vehicle system, has been an emerging research topic in recent years. In this paper, a traffic sign detection system based on color segmentation, speeded-up robust features (SURF detection and the k-nearest neighbor classifier is introduced. The proposed system benefits from the SURF detection algorithm, which achieves invariance to rotated, skewed and occluded signs. In addition to the accuracy and robustness issues, a TSR system should target a real-time implementation on an embedded system. Therefore, a hardware/software co-design architecture for a Zynq-7000 FPGA is presented as a major objective of this work. The sign detection operations are accelerated by programmable hardware logic that searches the potential candidates for sign classification. Sign recognition and classification uses a feature extraction and matching algorithm, which is implemented as a software component that runs on the embedded ARM CPU.

  10. Secondary iris recognition method based on local energy-orientation feature

    Science.gov (United States)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. First, energy-orientation features (EOF) are extracted from the iris with a two-dimensional Gabor filter, and a first recognition stage based on a similarity threshold divides the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are converted by histogram into an energy-orientation histogram feature (EOHF) and passed to a second recognition stage using the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method is among the most efficient and effective of comparable iris recognition algorithms.
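
    The second-stage comparison reduces to a chi-square distance between energy-orientation histograms; a small helper is sketched below with placeholder histograms.

    ```python
    import numpy as np

    def chi_square_distance(h1, h2, eps=1e-10):
        """Chi-square distance between two (normalized) histograms."""
        h1 = np.asarray(h1, float) / (np.sum(h1) + eps)
        h2 = np.asarray(h2, float) / (np.sum(h2) + eps)
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    probe = np.random.default_rng(0).random(36)     # placeholder: 36 orientation bins
    gallery = np.random.default_rng(1).random(36)
    print(chi_square_distance(probe, gallery))
    ```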

  11. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    International Nuclear Information System (INIS)

    Lee, Inho; Oh, Jaesung; Oh, Jun-Ho; Kim, Inhyeok

    2017-01-01

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment specified in the challenge issued by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensor generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.
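
    The projection step can be sketched as a plain pinhole projection of points already expressed in the camera frame; lens distortion is omitted and the intrinsic parameters below are placeholders, not the robot's calibration.

    ```python
    import numpy as np

    # Placeholder pinhole intrinsics (fx, fy, cx, cy), not an actual calibration.
    K = np.array([[525.0,   0.0, 320.0],
                  [  0.0, 525.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project_points(points_cam):
        """points_cam: (N, 3) X, Y, Z in the camera frame with Z > 0; returns (N, 2) pixels."""
        uvw = (K @ points_cam.T).T
        return uvw[:, :2] / uvw[:, 2:3]

    cloud = np.array([[0.2, -0.1, 1.5],
                      [0.0,  0.3, 2.0]])             # two example laser points (metres)
    print(project_points(cloud))
    ```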

  12. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment specified in the challenge issued by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensor generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.

  13. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks.

    Science.gov (United States)

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-07-05

    Human activity recognition has gained more interest in several research communities given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task. Different types of noise in wearable sensor data frequently hamper the human activity recognition classification process. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted data sensors, and robust against different issues in sensor data. We proved that the AHN classifier is very competitive for physical activity recognition and is very robust in comparison with other well-known machine learning methods.

  14. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Single Scale Retinex

    Directory of Open Access Journals (Sweden)

    Shan Juan Xie

    2015-07-01

    Full Text Available Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary from person to person. This usually causes poor quality of finger vein images, therefore degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy.

  15. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed which is based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a license plate candidate region detection and segmentation method is developed. Secondly, a new feature extraction model is designed that considers the combination of three sets of features. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.

  16. UAV VISUAL AUTOLOCALIZATION BASED ON AUTOMATIC LANDMARK RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. Silva Filho

    2017-08-01

    Full Text Available Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a highly discussed problem in the scientific community. There are several approaches being developed, but the main strategies yet considered are computer vision based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known, landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position in each landmark recognition was compatible with the GPS data, stating that the developed method can be used as an alternative navigation system.

  17. Compact holographic optical neural network system for real-time pattern recognition

    Science.gov (United States)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high- resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attache case has been developed. Rotation- shift-scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  18. Adaptive Self-Occlusion Behavior Recognition Based on pLSA

    Directory of Open Access Journals (Sweden)

    Hong-bin Tu

    2013-01-01

    Full Text Available Human action recognition is an important research area in computer vision. Focusing on the problem of self-occlusion in the field of human action recognition, a new adaptive occlusion state behavior recognition approach is presented based on a Markov random field and probabilistic latent semantic analysis (pLSA). Firstly, the Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable obtained from the phase space. Then, we propose a hierarchical area variety model. Finally, we use the pLSA topic model to recognize the human behavior. Experiments were performed on the KTH, Weizmann, and HumanEva datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method is more effective than the compared methods.

  19. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces from stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can clearly improve the performance of an automatic pain recognition system.

  20. Recognition of online handwritten Gurmukhi characters based on ...

    Indian Academy of Sciences (India)

    Karun Verma

    as the recognition of characters using a rule-based post-processing algorithm. ... ods in their work in order to recognize handwriting with pen-based devices. ..... Centernew is the average y-coordinate value of the new stroke and denotes the center ...

  1. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1998-01-01

    .... (4) Invariants: both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  2. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1996-01-01

    .... (4) Invariants -- both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  3. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Science.gov (United States)

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently find the weights of the different subregions, and the weighted matching scores of the subregions are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  4. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-01-01

    Full Text Available In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently find the weights of the different subregions, and the weighted matching scores of the subregions are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  5. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    Science.gov (United States)

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently find the weights of the different subregions, and the weighted matching scores of the subregions are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
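
    A toy illustration of the weighted subregion fusion step follows: each subregion contributes a matching score, and the final decision uses a weighted sum. The weights here are fixed placeholders; in the paper they are obtained with particle swarm optimization.

    ```python
    import numpy as np

    subregion_scores = np.array([0.82, 0.64, 0.91, 0.55])   # per-subregion matching scores (placeholder)
    weights = np.array([0.35, 0.15, 0.40, 0.10])             # hypothetical weights (PSO output in the paper)

    fused_score = float(weights @ subregion_scores)           # weighted fusion of the subregion scores
    decision = "accept" if fused_score > 0.7 else "reject"    # illustrative decision threshold
    print(fused_score, decision)
    ```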

  6. A Cooking Recipe Recommendation System with Visual Recognition of Food Ingredients

    Directory of Open Access Journals (Sweden)

    Keiji Yanai

    2014-04-01

    Full Text Available In this paper, we propose a cooking recipe recommendation system which runs on a consumer smartphone as an interactive mobile application. The proposed system employs real-time visual object recognition of food ingredients, and recommends cooking recipes related to the recognized food ingredients. Thanks to visual recognition, by only pointing the built-in camera of a smartphone at food ingredients, a user can instantly see related cooking recipes. The objective of the proposed system is to assist people who cook in deciding on a cooking recipe at grocery stores or in the kitchen. In the current implementation, the system can recognize 30 kinds of food ingredients in 0.15 seconds, and it has achieved an 83.93% recognition rate within the top six candidates. Through a user study, we confirmed the effectiveness of the proposed system.

  7. A Support System for the Electric Appliance Control Using Pose Recognition

    Science.gov (United States)

    Kawano, Takuya; Yamamoto, Kazuhiko; Kato, Kunihito; Hongo, Hitoshi

    In this paper, we propose an electric appliance control support system for aged and bedridden people using pose recognition. We propose a pose recognition method that distinguishes between seven poses of the user on the bed. First, the face and arm regions of the user are detected by using the skin color. Our system focuses on a recognition region surrounding the face region. Next, the higher order local autocorrelation features within the region are extracted. Linear discriminant analysis creates a coefficient matrix that can optimally distinguish among the training data from the seven poses. Our algorithm can recognize the seven poses even if the subject wears different clothes and slightly shifts or slants on the bed. From the experimental results, our system achieved an accuracy rate of over 99%. We also show that it is possible to construct a user-friendly system.
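
    The classification stage can be pictured with linear discriminant analysis over feature vectors (the paper uses higher-order local autocorrelation features computed in the face-centered region); the data below is synthetic and the feature dimensionality is a placeholder.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(350, 25))        # placeholder HLAC-style feature vectors
    y = rng.integers(0, 7, size=350)      # placeholder labels for the seven bed poses

    lda = LinearDiscriminantAnalysis().fit(X, y)   # learns the discriminating coefficient matrix
    print("training accuracy:", lda.score(X, y))
    ```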

  8. Modular Neural Networks and Type-2 Fuzzy Systems for Pattern Recognition

    CERN Document Server

    Melin, Patricia

    2012-01-01

    This book describes hybrid intelligent systems using type-2 fuzzy logic and modular neural networks for pattern recognition applications. Hybrid intelligent systems combine several intelligent computing paradigms, including fuzzy logic, neural networks, and bio-inspired optimization algorithms, which can be used to produce powerful pattern recognition systems. Type-2 fuzzy logic is an extension of traditional type-1 fuzzy logic that enables managing higher levels of uncertainty in complex real world problems, which are of particular importance in the area of pattern recognition. The book is organized in three main parts, each containing a group of chapters built around a similar subject. The first part consists of chapters with the main theme of theory and design algorithms, which are basically chapters that propose new models and concepts, which are the basis for achieving intelligent pattern recognition. The second part contains chapters with the main theme of using type-2 fuzzy models and modular neural ne...

  9. Dynamic facial expression recognition based on geometric and texture features

    Science.gov (United States)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.

  10. Chemical Entity Recognition and Resolution to ChEBI

    Science.gov (United States)

    Grego, Tiago; Pesquita, Catia; Bastos, Hugo P.; Couto, Francisco M.

    2012-01-01

    Chemical entities are ubiquitous throughout the biomedical literature, and the development of text-mining systems that can efficiently identify those entities is required. Due to the lack of available corpora and data resources, the community has focused its efforts on the development of gene and protein named entity recognition systems, but with the release of ChEBI and the availability of an annotated corpus, this task can be addressed. We developed a machine-learning-based method for chemical entity recognition and a lexical-similarity-based method for chemical entity resolution and compared them with Whatizit, a popular dictionary-based method. Our methods outperformed the dictionary-based method in all tasks, yielding an improvement in F-measure of 20% for the entity recognition task, 2–5% for the entity-resolution task, and 15% for combined entity recognition and resolution tasks. PMID:25937941

  11. Industrial robots with sensors and object recognition systems

    International Nuclear Information System (INIS)

    Koehler, G.W.

    1978-01-01

    The previous development and the present status of industrial robots equipped with sensors and object recognition systems are described. This type of equipment allows flexible automation of many work stations in which industrial robots of the first generation, which are unable to react automatically to changes in their respective environments apart from being linked to other machines, could not be used because of the prevailing boundary conditions. A classification system facilitates an overview of the large number of technical solutions now available. The manifold possibilities of application of this equipment are demonstrated by a number of examples. As a result of the present state of development of the required components, and also for economic reasons, there is a trend towards special designs for a small number of specific purposes and towards stripped-down object recognition systems with limited applications. A fitting description is offered of the term 'robot', which is now being used in various contexts, and an indication is made of the capabilities and components a machine should have, as a minimum, to be called a robot. Finally, reference is made to some potential lines of development serving to reduce expenditure and accelerate recognition processes. (orig.) [de

  12. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition

  13. Molecular Recognition: Detection of Colorless Compounds Based on Color Change

    Science.gov (United States)

    Khalafi, Lida; Kashani, Samira; Karimi, Javad

    2016-01-01

    A laboratory experiment is described in which students measure the amount of cetirizine in allergy-treatment tablets based on molecular recognition. The basis of recognition is the competition of cetirizine with phenolphthalein to form an inclusion complex with β-cyclodextrin. Phenolphthalein is pinkish under basic conditions, whereas its complex form…

  14. Efficient CEPSTRAL Normalization for Robust Speech Recognition

    National Research Council Canada - National Science Library

    Liu, Fu-Hua; Stern, Richard M; Huang, Xuedong; Acero, Alejandro

    1993-01-01

    In this paper we describe and compare the performance of a series of cepstrum-based procedures that enable the CMU SPHINX-II speech recognition system to maintain a high level of recognition accuracy...

  15. Human Gait Recognition Based on Multiview Gait Sequences

    Directory of Open Access Journals (Sweden)

    Xiaxi Huang

    2008-05-01

    Full Text Available Most of the existing gait recognition methods rely on a single view, usually the side view, of the walking person. This paper investigates the case in which several views are available for gait recognition. It is shown that each view has unequal discrimination power and, therefore, should have unequal contribution in the recognition process. In order to exploit the availability of multiple views, several methods for the combination of the results that are obtained from the individual views are tested and evaluated. A novel approach for the combination of the results from several views is also proposed based on the relative importance of each view. The proposed approach generates superior results, compared to those obtained by using individual views or by using multiple views that are combined using other combination methods.

  16. Infrared face recognition based on LBP histogram and KW feature selection

    Science.gov (United States)

    Xie, Zhihua

    2014-07-01

    The conventional feature representation based on the local binary pattern (LBP) histogram still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
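
    A rough sketch of the two steps combined in this record, block-wise LBP histograms followed by Kruskal-Wallis feature ranking, is given below using scikit-image and SciPy. The neighbourhood size, the 8x8 block grid, and the use of the H statistic for ranking are assumptions for illustration, not the settings reported in the paper.

        # Block-wise uniform LBP histograms with Kruskal-Wallis (KW) feature ranking.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from scipy.stats import kruskal

        P, R = 8, 1            # neighbours and radius (assumed values)
        N_BINS = P + 2         # number of 'uniform' LBP codes

        def lbp_histogram(block):
            codes = local_binary_pattern(block, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=np.arange(N_BINS + 1), density=True)
            return hist

        def face_features(image, grid=8):
            """Concatenate LBP histograms of grid x grid sub-blocks of a grey image."""
            h, w = image.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    block = image[i * h // grid:(i + 1) * h // grid,
                                  j * w // grid:(j + 1) * w // grid]
                    feats.append(lbp_histogram(block))
            return np.concatenate(feats)

        def kw_rank(features, labels):
            """Rank feature dimensions by the Kruskal-Wallis H statistic across classes."""
            labels = np.asarray(labels)
            scores = []
            for d in range(features.shape[1]):
                groups = [features[labels == c, d] for c in np.unique(labels)]
                scores.append(kruskal(*groups)[0])
            return np.argsort(scores)[::-1]   # most discriminative dimensions first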

  17. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research foci include emotion recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...

  18. Automatic system for localization and recognition of vehicle plate numbers

    OpenAIRE

    Vázquez, N.; Nakano, M.; Pérez-Meana, H.

    2003-01-01

    This paper proposes a vehicle number-plate identification system, which extracts the character features of a plate from an image captured by a digital camera and then identifies the symbols of the number plate using a multilayer neural network. The proposed recognition system consists of two processes: the training process and the recognition process. During the training process, a database is created using 310 vehicular plate images. Then, using this database, a multilayer neural network is trained...

  19. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples that do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  20. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples that do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  1. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

    Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  2. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes an iris double-recognition method based on a modified evolutionary neural network. Histogram equalization and a Laplacian of Gaussian operator are used to preprocess the iris and suppress illumination and noise interference, and a Haar wavelet converts the iris features into binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris class. If the iris cannot be identified in this first stage, a secondary recognition is performed. The connection weights of the back-propagation (BP) neural network are trained adaptively by a modified evolutionary neural network, which combines particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris databases under illumination and noise interference show that the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axes, the training and recognition times are shorter, and the stability and robustness are better.
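
    The first-stage decision in this record reduces to a Hamming-distance comparison between binary iris codes, which the short sketch below illustrates on its own. The 2048-bit code length, the synthetic codes, and the 0.32 threshold are assumptions for illustration, not values taken from the paper.

        # First-stage matching: Hamming distance between binary iris codes.
        import numpy as np

        def hamming_distance(code_a, code_b):
            """Fraction of disagreeing bits between two binary iris codes."""
            a = np.asarray(code_a, dtype=bool)
            b = np.asarray(code_b, dtype=bool)
            return np.count_nonzero(a ^ b) / a.size

        rng = np.random.default_rng(0)
        template = rng.integers(0, 2, 2048)          # enrolled iris code
        probe_same = template.copy()
        probe_same[:100] = 1 - probe_same[:100]      # small intra-class noise
        probe_other = rng.integers(0, 2, 2048)       # code from a different eye

        THRESHOLD = 0.32                             # assumed classification threshold
        for name, probe in [("same eye", probe_same), ("other eye", probe_other)]:
            d = hamming_distance(template, probe)
            print(f"{name}: HD={d:.3f} ->", "accept" if d < THRESHOLD else "second stage")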

  3. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.

  4. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery

    Directory of Open Access Journals (Sweden)

    Shu Tian

    2015-01-01

    Full Text Available Phacoemulsification is one of the most advanced surgical procedures for treating cataract. However, conventional surgery involves a low level of automation and relies heavily on the surgeon's skill. An attractive alternative is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems based on static images, dynamic videos of the surgery introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a VidEo-Based Intelligent Recognition and Decision (VEBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of the phacoemulsification cataract surgery. VEBIRD comprises a robust eye (iris) detector with a randomized Hough transform to precisely locate the eye in the noisy background, an effective probe tracker with Tracking-Learning-Detection to track the operation probe in the dynamic process, and an intelligent decider with discriminative learning to finally recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VEBIRD’s effectiveness.

  5. ROBUSTNESS OF A FACE-RECOGNITION TECHNIQUE BASED ON SUPPORT VECTOR MACHINES

    OpenAIRE

    Prashanth Harshangi; Koshy George

    2010-01-01

    The ever-increasing requirements of security concerns have placed a greater demand on face recognition surveillance systems. However, most current face recognition techniques are not quite robust with respect to factors such as variable illumination, facial expression and detail, and noise in images. In this paper, we demonstrate that face recognition using support vector machines is sufficiently robust to different kinds of noise, does not require image pre-processing, and can be used with...

  6. Enhanced iris recognition method based on multi-unit iris images

    Science.gov (United States)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris's image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes in the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on the iris open database of low-resolution images showed that the

  7. Cough Recognition Based on Mel Frequency Cepstral Coefficients and Dynamic Time Warping

    Science.gov (United States)

    Zhu, Chunmei; Liu, Baojun; Li, Ping

    Cough recognition provides important clinical information for the treatment of many respiratory diseases, but the assessment of cough frequency over a long period of time remains unsatisfactory for either clinical or research purposes. In this paper, given the advantages of dynamic time warping (DTW) and the characteristics of cough signals, an attempt is made to adopt DTW as the recognition algorithm for cough recognition. The process of cough recognition based on mel frequency cepstral coefficients (MFCC) and DTW is introduced. Experimental results on test samples from 3 subjects show that acceptable cough recognition performance is obtained by DTW with a small training set.
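
    A compact sketch of the MFCC-plus-DTW pipeline described here is shown below, using librosa for the MFCC step and a plain dynamic-programming DTW. The file names, the length normalisation, and the decision threshold are illustrative assumptions, not the paper's settings.

        # MFCC feature extraction and DTW template matching for cough recognition.
        import numpy as np
        import librosa

        def mfcc_features(path, n_mfcc=13):
            y, sr = librosa.load(path, sr=None)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, coeffs)

        def dtw_distance(a, b):
            """Classic dynamic-time-warping distance between two MFCC sequences."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m] / (n + m)   # length-normalised path cost

        # Hypothetical usage: compare an unknown sound against cough templates.
        # templates = [mfcc_features(p) for p in ["cough1.wav", "cough2.wav"]]
        # query = mfcc_features("unknown.wav")
        # score = min(dtw_distance(query, t) for t in templates)
        # print("cough" if score < 25.0 else "not a cough")   # threshold is an assumption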

  8. Radar Target Recognition Based on Stacked Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Zhao Feixiang

    2017-04-01

    Full Text Available Feature extraction is a key step in radar target recognition. The quality of the extracted features determines the performance of target recognition. However, it is difficult to capture the deep structure of the data with traditional methods. An autoencoder can learn features from the data and obtain feature representations at different levels. To eliminate the influence of noise, a radar target recognition method based on a stacked denoising sparse autoencoder is proposed in this paper. This method can extract features directly and efficiently by setting different hidden layers and numbers of iterations. Experimental results show that the proposed method is superior to the K-nearest neighbor method and the traditional stacked autoencoder.
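
    As a loose illustration of a single denoising sparse autoencoder layer (stacking repeats this greedily, feeding each layer's codes to the next), the PyTorch sketch below adds Gaussian corruption to the input and an L1 sparsity penalty on the codes. The layer sizes, noise level, and penalty weight are assumptions, not the paper's settings.

        # One denoising sparse autoencoder layer trained on corrupted inputs.
        import torch
        import torch.nn as nn

        class DenoisingSparseAE(nn.Module):
            def __init__(self, n_in=256, n_hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
                self.decoder = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                code = self.encoder(x)
                return self.decoder(code), code

        def train_layer(x, noise_std=0.1, sparsity_weight=1e-3, epochs=50):
            model = DenoisingSparseAE(n_in=x.shape[1])
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            mse = nn.MSELoss()
            for _ in range(epochs):
                noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
                recon, code = model(noisy)
                loss = mse(recon, x) + sparsity_weight * code.abs().mean()  # L1 sparsity
                opt.zero_grad()
                loss.backward()
                opt.step()
            return model

        # x = torch.randn(512, 256)              # placeholder for radar feature vectors
        # layer1 = train_layer(x)
        # codes = layer1.encoder(x).detach()     # becomes the input of the next stacked layer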

  9. Modular Adaptive System Based on a Multi-Stage Neural Structure for Recognition of 2D Objects of Discontinuous Production

    Directory of Open Access Journals (Sweden)

    I. Topalova

    2005-03-01

    Full Text Available This is a presentation of a new system for invariant recognition of 2D objects with overlapping classes that cannot be effectively recognized with traditional methods. The translation-, scale- and partial-rotation-invariant contour object description is transformed into a DCT spectrum space. The obtained frequency spectra are decomposed into frequency bands in order to feed different BPG neural nets (NNs). The NNs are structured in three stages - filtering and full rotation invariance; partial recognition; general classification. The designed multi-stage BPG neural structure shows very good accuracy and flexibility when tested with 2D objects used in discontinuous production. The achieved speed and the opportunity for easy restructuring and reprogramming of the system make it suitable for application in different applied systems for real-time work.

  10. Modular Adaptive System Based on a Multi-Stage Neural Structure for Recognition of 2D Objects of Discontinuous Production

    Directory of Open Access Journals (Sweden)

    I. Topalova

    2008-11-01

    Full Text Available This is a presentation of a new system for invariant recognition of 2D objects with overlapping classes that cannot be effectively recognized with traditional methods. The translation-, scale- and partial-rotation-invariant contour object description is transformed into a DCT spectrum space. The obtained frequency spectra are decomposed into frequency bands in order to feed different BPG neural nets (NNs). The NNs are structured in three stages - filtering and full rotation invariance; partial recognition; general classification. The designed multi-stage BPG neural structure shows very good accuracy and flexibility when tested with 2D objects used in discontinuous production. The achieved speed and the opportunity for easy restructuring and reprogramming of the system make it suitable for application in different applied systems for real-time work.

  11. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tools microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from the digit characters and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has optimal recognition precision and working speed. The average correct reading rate reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  12. Automated target recognition and tracking using an optical pattern recognition neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1991-01-01

    The on-going development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that integrates an innovative optical parallel processor and a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator in which holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections/sec, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.

  13. The neural correlates of gist-based true and false recognition

    Science.gov (United States)

    Gutchess, Angela H.; Schacter, Daniel L.

    2012-01-01

    When information is thematically related to previously studied information, gist-based processes contribute to false recognition. Using functional MRI, we examined the neural correlates of gist-based recognition as a function of increasing numbers of studied exemplars. Sixteen participants incidentally encoded small, medium, and large sets of pictures, and we compared the neural response at recognition using parametric modulation analyses. For hits, regions in middle occipital, middle temporal, and posterior parietal cortex linearly modulated their activity according to the number of related encoded items. For false alarms, visual, parietal, and hippocampal regions were modulated as a function of the encoded set size. The present results are consistent with prior work in that the neural regions supporting veridical memory also contribute to false memory for related information. The results also reveal that these regions respond to the degree of relatedness among similar items, and implicate perceptual and constructive processes in gist-based false memory. PMID:22155331

  14. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    Directory of Open Access Journals (Sweden)

    Srdjan Sladojevic

    2016-01-01

    Full Text Available The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, through the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases and to distinguish diseased from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this is the first time this method has been proposed for plant disease recognition. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Centre, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, with an average of 96.3%.
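
    The paper trains its network in Caffe; purely as an illustration of the kind of model involved, the sketch below defines a minimal convolutional classifier in PyTorch with 14 outputs (13 assumed disease classes plus a healthy-leaf class). The layer sizes, the 224x224 input, and the class arrangement are assumptions, not the authors' architecture.

        # Minimal CNN classifier for leaf images (illustrative, not the paper's model).
        import torch
        import torch.nn as nn

        class LeafCNN(nn.Module):
            def __init__(self, n_classes=14):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = LeafCNN()
        dummy = torch.randn(4, 3, 224, 224)   # a batch of four leaf images
        print(model(dummy).shape)             # torch.Size([4, 14])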

  15. A Research of Speech Emotion Recognition Based on Deep Belief Network and SVM

    Directory of Open Access Journals (Sweden)

    Chenchen Huang

    2014-01-01

    Full Text Available Feature extraction is a very important part of speech emotion recognition, and in view of the feature extraction problem in speech emotion recognition, this paper proposes a new feature extraction method that uses deep belief networks (DBNs) to extract emotional features from the speech signal automatically. A five-layer DBN is trained to extract speech emotion features, and multiple consecutive frames are incorporated to form a high-dimensional feature. The features produced by the trained DBN are the input of a nonlinear SVM classifier, and a multiple-classifier speech emotion recognition system is finally achieved. The speech emotion recognition rate of the system reached 86.5%, which was 7% higher than that of the original method.

  16. Recognition and Evaluation of Clinical Section Headings in Clinical Documents Using Token-Based Formulation with Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hong-Jie Dai

    2015-01-01

    Full Text Available An electronic health record (EHR) is a digital data format that collects electronic health information about an individual patient or population. To enhance the meaningful use of EHRs, information extraction techniques have been developed to recognize clinical concepts mentioned in EHRs. Nevertheless, the clinical judgment of an EHR cannot be known solely from the recognized concepts without considering their contextual information. In order to improve the readability and accessibility of EHRs, this work developed a section heading recognition system for clinical documents. In contrast to formulating the section heading recognition task as a sentence classification problem, this work proposed a token-based formulation with the conditional random field (CRF) model. A standard section heading recognition corpus was compiled by annotators with clinical experience to evaluate the performance and compare it with sentence classification and dictionary-based approaches. The results of the experiments showed that the proposed method achieved a satisfactory F-score of 0.942, which outperformed the sentence-based approach and the best dictionary-based system by 0.087 and 0.096, respectively. One important advantage of our formulation over the sentence-based approach is that it presents an integrated solution without the need to develop additional heuristic rules for isolating the headings from the surrounding section contents.
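
    To make the token-based formulation concrete, the sketch below tags each token with a BIO-style heading label using the sklearn-crfsuite package as a stand-in for the authors' CRF implementation. The toy features, the label scheme, and the two-sentence training set are assumptions for illustration only.

        # Token-based section-heading tagging with a linear-chain CRF.
        import sklearn_crfsuite

        def token_features(tokens, i):
            tok = tokens[i]
            return {
                "lower": tok.lower(),
                "is_title": tok.istitle(),
                "is_upper": tok.isupper(),
                "ends_colon": tok.endswith(":"),
                "position": i,
            }

        def sent_features(tokens):
            return [token_features(tokens, i) for i in range(len(tokens))]

        train_sents = [
            ["HISTORY", "OF", "PRESENT", "ILLNESS", ":", "The", "patient", "reports", "pain"],
            ["MEDICATIONS", ":", "Aspirin", "81", "mg", "daily"],
        ]
        train_labels = [
            ["B-HEAD", "I-HEAD", "I-HEAD", "I-HEAD", "I-HEAD", "O", "O", "O", "O"],
            ["B-HEAD", "I-HEAD", "O", "O", "O", "O"],
        ]

        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
        crf.fit([sent_features(s) for s in train_sents], train_labels)
        print(crf.predict([sent_features(["ALLERGIES", ":", "None", "known"])]))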

  17. A Cross-Layer Biometric Recognition System for Mobile IoT Devices

    Directory of Open Access Journals (Sweden)

    Shayan Taheri

    2018-02-01

    Full Text Available A biometric recognition system is one of the leading candidates for the current and the next generation of smart visual systems. The visual system is the engine of the surveillance cameras that have great importance for intelligence and security purposes. These surveillance devices can be a target of adversaries for accomplishing various malicious scenarios, such as disabling the camera at critical times or preventing the recognition of a criminal. In this work, we propose a cross-layer biometric recognition system that has small computational complexity and is suitable for mobile Internet of Things (IoT) devices. Furthermore, due to the involvement of both hardware and software in realizing this system in a decussate and chaining structure, it is easier to locate and provide alternative paths for the system flow in the case of an attack. For the security analysis of this system, one of its elements, the advanced encryption standard (AES), is infected by four different hardware Trojans that target different parts of this module. The purpose of these Trojans is to sabotage the biometric data that are under process by the biometric recognition system. All of the software and hardware modules of this system are implemented using MATLAB and Verilog HDL, respectively. According to the performance evaluation results, the system shows acceptable performance in recognizing healthy biometric data. It is able to detect the infected data as well. With respect to its hardware results, the system may not contribute significantly to the hardware design parameters of a surveillance camera considering all the hardware elements within the device.

  18. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining a biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert the image to binary form, filter out noise, and locate the reference point. For the binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier compares the fractal patterns within the small-scale database. A PSO algorithm is used to tune the parameters and improve the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.
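
    Katz's fractal dimension, used in this record for feature extraction, has a simple closed form: FD = log10(n) / (log10(n) + log10(d / L)), where L is the total curve length, d the maximum distance from the first point, and n the number of steps. The sketch below applies it to a 1-D waveform; applying it row-by-row or along an unrolled contour of the binary fingerprint image is an assumption about the pipeline, not a detail given in the paper.

        # Katz fractal dimension of a 1-D signal.
        import numpy as np

        def katz_fd(signal):
            """FD = log10(n) / (log10(n) + log10(d / L)) for a sampled waveform."""
            signal = np.asarray(signal, dtype=float)
            x = np.arange(len(signal))
            steps = np.hypot(np.diff(x), np.diff(signal))
            L = steps.sum()                                    # total curve length
            d = np.hypot(x - x[0], signal - signal[0]).max()   # max distance from start
            n = len(signal) - 1                                # number of steps
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        t = np.linspace(0, 4 * np.pi, 200)
        print(katz_fd(np.sin(t)))             # smooth curve: FD close to 1
        print(katz_fd(np.random.rand(200)))   # irregular curve: slightly larger FD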

  19. A Development of Hybrid Drug Information System Using Image Recognition

    Directory of Open Access Journals (Sweden)

    HwaMin Lee

    2015-04-01

    Full Text Available In order to prevent drug abuse or misuse and avoid over-prescription, it is necessary for the medicine taker to be provided with detailed information about the medicine. In this paper, we propose a drug information system and develop an application that provides information through drug image recognition using a smartphone. We designed a content-based drug image search algorithm using the color, shape, and imprint of the drug. Our convenient application can provide users with detailed information about drugs and prevent drug misuse.

  20. Evaluating a voice recognition system: finding the right product for your department.

    Science.gov (United States)

    Freeh, M; Dewey, M; Brigham, L

    2001-06-01

    The Department of Radiology at the University of Utah Health Sciences Center has been in the process of transitioning from the traditional film-based department to a digital imaging department for the past 2 years. The department is now transitioning from the traditional method of dictating reports (dictation by radiologist to transcription to review and signing by radiologist) to a voice recognition system. The transition to digital operations will not be complete until we have the ability to directly interface the dictation process with the image review process. Voice recognition technology has advanced to the level where it can and should be an integral part of the new way of working in radiology and is an integral part of an efficient digital imaging department. The transition to voice recognition requires the task of identifying the product and the company that will best meet a department's needs. This report introduces the methods we used to evaluate the vendors and the products available as we made our purchasing decision. We discuss our evaluation method and provide a checklist that can be used by other departments to assist with their evaluation process. The criteria used in the evaluation process fall into the following major categories: user operations, technical infrastructure, medical dictionary, system interfaces, service support, cost, and company strength. Conclusions drawn from our evaluation process will be detailed, with the intention being to shorten the process for others as they embark on a similar venture. As more and more organizations investigate the many products and services that are now being offered to enhance the operations of a radiology department, it becomes increasingly important that solid methods are used to most effectively evaluate the new products. This report should help others complete the task of evaluating a voice recognition system and may be adaptable to other products as well.

  1. Automatic Recognition of Chinese Personal Name Using Conditional Random Fields and Knowledge Base

    Directory of Open Access Journals (Sweden)

    Chuan Gu

    2015-01-01

    Full Text Available Based on the features of Chinese personal names, this paper presents an approach to Chinese personal name recognition using conditional random fields (CRF) and a knowledge base. The method builds multiple features for the CRF model by adopting the Chinese character as the processing unit, selects useful features based on a knowledge-base selection algorithm and an incremental feature template, and finally implements the automatic recognition of Chinese personal names from Chinese documents. Experimental results on an open real-world corpus demonstrate the effectiveness of our method, with high recognition accuracy and recall.

  2. Comparing source-based and gist-based false recognition in aging and Alzheimer's disease.

    Science.gov (United States)

    Pierce, Benton H; Sullivan, Alison L; Schacter, Daniel L; Budson, Andrew E

    2005-07-01

    This study examined 2 factors contributing to false recognition of semantic associates: errors based on confusion of source and errors based on general similarity information or gist. The authors investigated these errors in patients with Alzheimer's disease (AD), age-matched control participants, and younger adults, focusing on each group's ability to use recollection of source information to suppress false recognition. The authors used a paradigm consisting of both deep and shallow incidental encoding tasks, followed by study of a series of categorized lists in which several typical exemplars were omitted. Results showed that healthy older adults were able to use recollection from the deep processing task to some extent but less than that used by younger adults. In contrast, false recognition in AD patients actually increased following the deep processing task, suggesting that they were unable to use recollection to oppose familiarity arising from incidental presentation. (c) 2005 APA, all rights reserved.

  3. Designing a Low-Resolution Face Recognition System for Long-Range Surveillance

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2016-01-01

    Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually

  4. Facial recognition in education system

    Science.gov (United States)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings exploit emotions comprehensively for conveying messages and for their resolution. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion while handling varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. The method can automatically identify the observed face movements and facial expressions in an image, capturing different aspects of emotion and facial expression.

  5. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. they should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  6. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favorable solution for object recognition systems in both the visible and infrared domains. However, the lack of training data in the case of maritime ships research leads to poor performance due to the problem of overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, which is the largest dataset of maritime ships, confirm that the proposed approach outperforms the state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation-based training of convolutional neural networks, primarily for low-level feature extraction.
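
    Only the Extreme Learning Machine idea behind the classifier is sketched below: fixed random input weights and a closed-form pseudo-inverse solution for the output weights, applied here to placeholder feature vectors rather than the CNN features used in the paper. The hidden-layer size, the sigmoid activation, and the six ship categories are assumptions.

        # Extreme Learning Machine classifier with random hidden weights.
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class ELMClassifier:
            def __init__(self, n_hidden=200):
                self.n_hidden = n_hidden

            def fit(self, X, y):
                n_classes = int(y.max()) + 1
                T = np.eye(n_classes)[y]                   # one-hot targets
                self.W = rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = rng.normal(size=self.n_hidden)
                H = sigmoid(X @ self.W + self.b)           # random hidden layer
                self.beta = np.linalg.pinv(H) @ T          # closed-form output weights
                return self

            def predict(self, X):
                H = sigmoid(X @ self.W + self.b)
                return (H @ self.beta).argmax(axis=1)

        X = rng.normal(size=(300, 64))        # placeholder for CNN feature vectors
        y = rng.integers(0, 6, 300)           # e.g., six ship categories
        print(ELMClassifier().fit(X, y).predict(X[:5]))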

  7. Device-Free Indoor Activity Recognition System

    Directory of Open Access Journals (Sweden)

    Mohammed Abdulaziz Aide Al-qaness

    2016-11-01

    Full Text Available In this paper, we explore the properties of the Channel State Information (CSI) of WiFi signals and present a device-free indoor activity recognition system. Our proposed system uses only one ubiquitous router access point and a laptop as a detection point, while the user remains free and needs neither to wear sensors nor to carry devices. The proposed system recognizes six daily activities: walking, crawling, falling, standing, sitting, and lying. We have built the prototype with an effective feature extraction method and a fast classification algorithm. The proposed system has been evaluated in a real and complex environment in both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios, and the results validate the performance of the proposed system.

  8. Episodic Reasoning for Vision-Based Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Maria J. Santofimia

    2014-01-01

    Full Text Available Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by analyzing body postures alone. On the contrary, this task should be supported by profound knowledge of the nature of human agency and its tight connection to the reasons and motivations that explain it. The combination of this knowledge with knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has on improving the accuracy of a computer vision system for human action recognition. This work also presents the formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning.

  9. Application of Business Process Management to drive the deployment of a speech recognition system in a healthcare organization.

    Science.gov (United States)

    González Sánchez, María José; Framiñán Torres, José Manuel; Parra Calderón, Carlos Luis; Del Río Ortega, Juan Antonio; Vigil Martín, Eduardo; Nieto Cervera, Jaime

    2008-01-01

    We present a methodology based on Business Process Management to guide the development of a speech recognition system in a hospital in Spain. The methodology eases the deployment of the system by 1) involving the clinical staff in the process, 2) providing the IT professionals with a description of the process and its requirements, 3) assessing the advantages and disadvantages of the speech recognition system, as well as its impact on the organisation, and 4) helping to reorganise the healthcare process before implementing the new technology in order to identify how it can better contribute to the overall objective of the organisation.

  10. Face recognition in the thermal infrared domain

    Science.gov (United States)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for the automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods in face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve a very high level of recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible light images and can be used to improve algorithms for human face recognition in several aspects. Mid-wavelength or far-wavelength infrared, also referred to as thermal infrared, therefore seems to be a promising alternative. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  11. Hand-Geometry Recognition Based on Contour Parameters

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Bazen, A.M.; Booij, W.D.T.; Hendrikse, A.J.; Jain, A.K.; Ratha, N.K.

    This paper demonstrates the feasibility of a new method of hand-geometry recognition based on parameters derived from the contour of the hand. The contour is completely determined by the black-and-white image of the hand and can be derived from it by means of simple image-processing techniques. It

  12. Infrared and visible fusion face recognition based on NSCT domain

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is applied separately to the infrared and visible face images, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit effective discriminant features and balance the power of the high- and low-frequency NSCT coefficients, the local Gabor binary pattern (LGBP) and the local binary pattern (LBP) are applied in different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to combine all the features for the final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
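
    The final score-level fusion step of this record can be illustrated with a weighted sum of min-max normalised matcher scores, as in the sketch below. The equal weights, the synthetic scores, and the normalisation choice are assumptions for illustration.

        # Weighted-sum score-level fusion of near-infrared and visible matchers.
        import numpy as np

        def min_max_norm(scores):
            scores = np.asarray(scores, dtype=float)
            return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

        def fuse(nir_scores, vis_scores, w_nir=0.5):
            """Combine per-gallery-subject similarity scores from the two matchers."""
            return w_nir * min_max_norm(nir_scores) + (1 - w_nir) * min_max_norm(vis_scores)

        nir = [0.61, 0.35, 0.42, 0.58]   # probe vs. four gallery subjects (NIR matcher)
        vis = [0.30, 0.25, 0.70, 0.40]   # the same comparisons in visible light
        print("identified subject:", int(np.argmax(fuse(nir, vis))))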

  13. A novel handwritten character recognition system using gradient ...

    Indian Academy of Sciences (India)

    The issues faced by handwritten character recognition systems include the similarity ... statistical/structural features have also been successfully used in character ... The coordinates (xc, yc) of the centroid are calculated by equations (4) and (5).

  14. Chinese character recognition based on Gabor feature extraction and CNN

    Science.gov (United States)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject in pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition is very difficult. In order to solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a convolutional neural network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for the eight orientations of Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution are the input of the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
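
    The eight-orientation Gabor feature maps that feed the CNN in this record can be generated with OpenCV as in the sketch below; the kernel size, wavelength, and other filter parameters are assumptions, and the rendered glyph merely stands in for a binarized character image.

        # Eight-orientation Gabor feature maps for a grey-scale character image.
        import cv2
        import numpy as np

        def gabor_bank(n_orientations=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
            thetas = np.arange(n_orientations) * np.pi / n_orientations
            return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma) for t in thetas]

        def gabor_maps(gray_image):
            """One filtered map per orientation; these become the CNN input channels."""
            return np.stack([cv2.filter2D(gray_image, cv2.CV_32F, k) for k in gabor_bank()])

        img = np.zeros((64, 64), np.uint8)
        cv2.putText(img, "A", (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 2)  # stand-in glyph
        print(gabor_maps(img).shape)   # (8, 64, 64)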

  15. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2015-07-01

    Full Text Available Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands.

  16. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors.

    Science.gov (United States)

    Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung

    2015-07-13

    Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands.

  17. Implementing an excellence in teaching recognition system: needs analysis and recommendations.

    Science.gov (United States)

    Schindler, Nancy; Corcoran, Julia C; Miller, Megan; Wang, Chih-Hsiung; Roggin, Kevin; Posner, Mitchell; Fryer, Jonathan; DaRosa, Debra A

    2013-01-01

    Teaching awards have been suggested to serve a variety of purposes. The specific characteristics of teaching awards and the associated effectiveness at achieving planned purposes are poorly understood. A needs analysis was performed to inform recommendations for an Excellence in Teaching Recognition System to meet the needs of surgical education leadership. We performed a 2-part needs analysis beginning with a review of the literature. We then, developed, piloted, and administered a survey instrument to General Surgery program leaders. The survey examined the features and perceived effectiveness of existing teaching awards systems. A multi-institution committee of program directors, clerkship directors, and Vice-Chairs of education then met to identify goals and develop recommendations for implementation of an "Excellence in Teaching Recognition System." There is limited evidence demonstrating effectiveness of existing teaching awards in medical education. Evidence supports the ability of such awards to demonstrate value placed on teaching, to inspire faculty to teach, and to contribute to promotion. Survey findings indicate that existing awards strive to achieve these purposes and that educational leaders believe awards have the potential to do this and more. Leaders are moderately satisfied with existing awards for providing recognition and demonstrating value placed on teaching, but they are less satisfied with awards for motivating faculty to participate in teaching or for contributing to promotion. Most departments and institutions honor only a few recipients annually. There is a paucity of literature addressing teaching recognition systems in medical education and little evidence to support the success of such systems in achieving their intended purposes. The ability of awards to affect outcomes such as participation in teaching and promotion may be limited by the small number of recipients for most existing awards. We propose goals for a Teaching Recognition

  18. Analysis of Documentation Speed Using Web-Based Medical Speech Recognition Technology: Randomized Controlled Trial.

    Science.gov (United States)

    Vogel, Markus; Kaisers, Wolfgang; Wassmuth, Ralf; Mayatepek, Ertan

    2015-11-03

    Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area and the time to complete the documentation including all necessary corrections, correction effort, number of characters, and mood of participant were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. The number of clinical reports eligible for further analysis stood at 1455. Out of 1455 reports, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not assisted by ASR. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted

  19. StreamAR: incremental and active learning with evolving sensory data for activity recognition

    OpenAIRE

    Abdallah, Z.; Gaber, M.; Srinivasan, B.; Krishnaswamy, S.

    2012-01-01

    Activity recognition focuses on inferring current user activities by leveraging sensory data available in today's sensor-rich environments. Supervised learning has been applied pervasively for activity recognition. Typical activity recognition techniques process sensory data based on point-by-point approaches. In this paper, we propose a novel cluster-based classification for activity recognition systems, termed StreamAR. The system incorporates incremental and active learning for mining user ...

  20. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    Science.gov (United States)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes

  1. Palm vein recognition based on directional empirical mode decomposition

    Science.gov (United States)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for the processing of texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to larger scales. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  2. Improving a Deep Learning based RGB-D Object Recognition Model by Ensemble Learning

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal; Heder, Thomas

    2018-01-01

    Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored...... in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments...

  3. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    Science.gov (United States)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech using available standard assessment methods based on human perception. This paper presents an automated approach to assess the speech quality of a dysarthric speaker with cerebral palsy. With the consideration of two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce consistent speech signals for a certain word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate of speech made by an individual dysarthric speaker before actual exhaustive implementation of an automatic speech recognition system for the speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square of difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, called the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.

  4. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    Science.gov (United States)

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial process of decision making in a real-world agent interaction, familiarity judgment leads the following processes for intelligence. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties on the recognition memory, they have been focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper, we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows a high-order relationship between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  5. Effects of emotional and perceptual-motor stress on a voice recognition system's accuracy: An applied investigation

    Science.gov (United States)

    Poock, G. K.; Martin, B. J.

    1984-02-01

    This was an applied investigation examining the ability of a speech recognition system to recognize speakers' inputs when the speakers were under different stress levels. Subjects were asked to speak to a voice recognition system under three conditions: (1) normal office environment, (2) emotional stress, and (3) perceptual-motor stress. Results indicate a definite relationship between voice recognition system performance and the type of low stress reference patterns used to achieve recognition.

  6. A single-system model predicts recognition memory and repetition priming in amnesia.

    Science.gov (United States)

    Berry, Christopher J; Kessels, Roy P C; Wester, Arie J; Shanks, David R

    2014-08-13

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) overall was greater than that of items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than a selective explicit memory system deficit. Copyright © 2014 the authors 0270-6474/14/3410963-12$15.00/0.

  7. Recognition of sign language with an inertial sensor-based data glove.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam

    2015-01-01

    Communication between people with normal hearing and hearing impairment is difficult. Recently, a variety of studies on sign language recognition have presented benefits from the development of information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the data glove are transmitted to a host application (implemented as a Windows program on a PC). Next, the data are converted into angle data, and the angle information is displayed on the host application and verified by outputting three-dimensional models to the display. An experiment was performed with five subjects, three females and two males, and a performance set comprising numbers from one to nine was repeated five times. The system achieves a 99.26% movement detection rate, and approximately 98% recognition rate for each finger's state. The proposed system is expected to be a more portable and useful system when this algorithm is applied to smartphone applications for use in situations such as emergencies.

  8. Defect Pattern Recognition Based on Partial Discharge Characteristics of Oil-Pressboard Insulation for UHVDC Converter Transformer

    Directory of Open Access Journals (Sweden)

    Wen Si

    2018-03-01

    Full Text Available The ultra high voltage direct current (UHVDC transmission system has advantages in delivering electrical energy over long distance at high capacity. UHVDC converter transformer is a key apparatus and its insulation state greatly affects the safe operation of the transmission system. Partial discharge (PD characteristics of oil-pressboard insulation under combined AC-DC voltage are the foundation for analyzing the insulation state of UHVDC converter transformers. The defect pattern recognition based on PD characteristics is an important part of the state monitoring of converter transformers. In this paper, PD characteristics are investigated with the established experimental platform of three defect models (needle-plate, surface discharge and air gap under 1:1 combined AC-DC voltage. The different PD behaviors of three defect models are discussed and explained through simulation of electric field strength distribution and discharge mechanism. For the recognition of defect types when multiple types of sources coexist, the Random Forests algorithm is used for recognition. In order to reduce the computational layer and the loss of information caused by the extraction of traditional features, the preprocessed single PD pulses and phase information are chosen to be the features for learning and test. Zero-padding method is discussed for normalizing the features. Based on the experimental data, Random Forests and Least Squares Support Vector Machine are compared in the performance of computing time, recognition accuracy and adaptability. It is proved that Random Forests is more suitable for big data analysis.
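
    As a hedged sketch of the classification step described above (zero-padding single PD pulses to a common length and feeding them to a Random Forests classifier), one might write the following with scikit-learn; the pulse lengths, tree count, synthetic data, and train/test split are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def zero_pad(pulses, target_len):
    """Right-pad each variable-length PD pulse with zeros to a fixed length."""
    out = np.zeros((len(pulses), target_len))
    for i, p in enumerate(pulses):
        n = min(len(p), target_len)
        out[i, :n] = p[:n]
    return out

# Hypothetical data: variable-length pulse waveforms and defect labels
# 0 = needle-plate, 1 = surface discharge, 2 = air gap
rng = np.random.default_rng(0)
pulses = [rng.standard_normal(rng.integers(200, 500)) for _ in range(300)]
labels = rng.integers(0, 3, size=300)

X = zero_pad(pulses, target_len=500)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```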

  9. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  10. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  11. An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2014-01-01

    Full Text Available In the past, the kernel of automatic speech recognition (ASR is dynamic time warping (DTW, which is feature-based template matching and belongs to the category technique of dynamic programming (DP. Although DTW is an early developed ASR technique, DTW has been popular in lots of applications. DTW is playing an important role for the known Kinect-based gesture recognition application now. This paper proposed an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model- (HMM- like method where the concept of the typical HMM statistical model is brought into the design of DTW. The developed HMM-like DTW method, transforming feature-based DTW recognition into model-based DTW recognition, will be able to behave as the HMM recognition technique and therefore proposed HMM-like DTW with the HMM-like recognition model will have the capability to further perform model adaptation (also known as speaker adaptation. A series of experimental results in home automation-based multimedia access service environments demonstrated the superiority and effectiveness of the developed smart speech recognition system by HMM-like DTW.
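
    For readers unfamiliar with the DTW core that the HMM-like variant above builds on, a minimal dynamic-programming implementation over feature-vector sequences is sketched below; this is the textbook algorithm, not the authors' HMM-like extension, and the toy MFCC-like data are assumptions.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two sequences of feature vectors.

    seq_a: (n, d) array, seq_b: (m, d) array. Returns the cumulative alignment cost.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                 cost[i, j - 1],               # deletion
                                 cost[i - 1, j - 1])           # match
    return cost[n, m]

# Hypothetical usage: compare a test utterance against a stored template (e.g. MFCC frames)
template = np.random.rand(40, 13)
utterance = np.random.rand(55, 13)
print(dtw_distance(template, utterance))
```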

  12. An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2014-01-01

    Full Text Available We proposed a face recognition algorithm based on both the multilinear principal component analysis (MPCA and linear discriminant analysis (LDA. Compared with current traditional existing face recognition methods, our approach treats face images as multidimensional tensor in order to find the optimal tensor subspace for accomplishing dimension reduction. The LDA is used to project samples to a new discriminant feature space, while the K nearest neighbor (KNN is adopted for sample set classification. The results of our study and the developed algorithm are validated with face databases ORL, FERET, and YALE and compared with PCA, MPCA, and PCA + LDA methods, which demonstrates an improvement in face recognition accuracy.
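
    A hedged, simplified sketch of the pipeline described above is given below, using ordinary PCA in place of multilinear MPCA (which treats images as tensors), followed by an LDA projection and a nearest-neighbour classifier; this is roughly the PCA + LDA baseline the abstract compares against, and the component counts are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import fetch_olivetti_faces
from sklearn.model_selection import cross_val_score

# The ORL/Olivetti faces mentioned in the abstract are available through scikit-learn
# (downloaded on first use).
faces = fetch_olivetti_faces()
X, y = faces.data, faces.target    # X: (400, 4096) flattened 64x64 images

model = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),  # stand-in for MPCA
    LinearDiscriminantAnalysis(),                         # discriminant projection
    KNeighborsClassifier(n_neighbors=1),                  # nearest-neighbour vote
)
print(cross_val_score(model, X, y, cv=5).mean())
```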

  13. Hand biometric recognition based on fused hand geometry and vascular patterns.

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
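
    The equal error rate (EER) quoted above is the operating point where the false accept rate equals the false reject rate. A hedged numpy sketch for estimating it from genuine and impostor match scores follows; the score arrays are synthetic, not the paper's data.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER from similarity scores (higher = better match)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine pairs rejected
        far = np.mean(impostor_scores >= t)  # impostor pairs accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical usage with synthetic score distributions
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.05, 1000)
impostor = rng.normal(0.5, 0.05, 5000)
print(equal_error_rate(genuine, impostor))
```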

  14. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of a Lithuanian isolated word recognition algorithm. FPGA is selected for parallel process implementation using VHDL to ensure fast signal processing at a low-rate clock signal. Cepstrum analysis was applied to feature extraction in voice. The dynamic time warping algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%. A recognition rate of 58% was achieved for speaker-independent records. Calculation of cepstrum coefficients lasted 8.52 ms at a 50 MHz clock, while 100 DTWs took 66.56 ms at a 25 MHz clock. Article in Lithuanian.

  15. The effect of image resolution on the performance of a face recognition system

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding

  16. Appropriate baseline values for HMM-based speech recognition

    CSIR Research Space (South Africa)

    Barnard, E

    2004-11-01

    Full Text Available A number of issues related to the development of speech-recognition systems with Hidden Markov Models (HMM) are discussed. A set of systematic experiments using the HTK toolkit and the TIMIT database is used to elucidate matters such as the number...

  17. Enhancing spoken connected-digit recognition accuracy by error ...

    Indian Academy of Sciences (India)

    nition systems have gained acceptable accuracy levels, the accuracy of recognition of current connected ... bar code and ISBN1 library code to name a few. ..... Kopec G, Bush M 1985 Network-based connected-digit recognition. IEEE Trans.

  18. Visual Localization by Place Recognition Based on Multifeature (D-λLBP++HOG

    Directory of Open Access Journals (Sweden)

    Yongliang Qiao

    2017-01-01

    Full Text Available Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information using stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). In order to represent the scene in depth, multifeature fusion of D-CSLBP and HOG features provides valuable information and permits decreasing the effect of some typical problems in place recognition such as perceptual aliasing. It improves visual recognition performance by taking advantage of depth, texture, and shape information. In addition, for real-time visual localization, the locality-sensitive hashing (LSH) method is used to compress the high-dimensional multifeature representation into binary vectors. This speeds up the process of image matching. To show its effectiveness, the proposed method is tested and evaluated using real datasets acquired in outdoor environments. Given the obtained results, our approach allows more effective visual localization compared with the state-of-the-art method FAB-MAP.
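
    As a hedged illustration of the final compression step above, random-projection locality-sensitive hashing can turn a high-dimensional fused descriptor into a short binary code whose Hamming distance approximates the original similarity; the code length and descriptor dimensionality below are assumptions, not the paper's settings.

```python
import numpy as np

class RandomProjectionLSH:
    """Compress real-valued descriptors into binary codes via random hyperplanes."""

    def __init__(self, input_dim, n_bits=128, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, input_dim))

    def hash(self, descriptor):
        # One bit per hyperplane: which side of the plane the descriptor falls on.
        return (self.planes @ descriptor > 0).astype(np.uint8)

def hamming_distance(code_a, code_b):
    return int(np.count_nonzero(code_a != code_b))

# Hypothetical usage: 4096-D fused D-CSLBP + HOG descriptors -> 128-bit codes
lsh = RandomProjectionLSH(input_dim=4096, n_bits=128)
query = np.random.rand(4096)
database_image = np.random.rand(4096)
print(hamming_distance(lsh.hash(query), lsh.hash(database_image)))
```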

  19. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available It is significant for the final goal of RoboCup to realize the recognition of generic balls for soccer robots. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining the modified Haar-like features and AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the phase of offline training, numerous sub-images are acquired from various panoramic images, including generic balls, and then the modified Haar-like features are extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the phase of online recognition, and according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.

  20. GENDER RECOGNITION BASED ON SIFT FEATURES

    OpenAIRE

    Sahar Yousefi; Morteza Zahedi

    2011-01-01

    This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition assumes an expensive, computationally and time-consuming pre-processing step for alignment, in which face images are aligned so that facial landmarks like the eyes, nose, lips, and chin are placed in uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages that eliminates align...

  1. Compact Acoustic Models for Embedded Speech Recognition

    Directory of Open Access Journals (Sweden)

    Lévy Christophe

    2009-01-01

    Full Text Available Speech recognition applications are known to require a significant amount of resources. However, embedded speech recognition only authorizes few KB of memory, few MIPS, and small amount of training data. In order to fit the resource constraints of embedded applications, an approach based on a semicontinuous HMM system using state-independent acoustic modelling is proposed. A transformation is computed and applied to the global model in order to obtain each HMM state-dependent probability density functions, authorizing to store only the transformation parameters. This approach is evaluated on two tasks: digit and voice-command recognition. A fast adaptation technique of acoustic models is also proposed. In order to significantly reduce computational costs, the adaptation is performed only on the global model (using related speaker recognition adaptation techniques with no need for state-dependent data. The whole approach results in a relative gain of more than 20% compared to a basic HMM-based system fitting the constraints.

  2. A survey on vision-based human action recognition

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion

  3. RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris

    2014-01-01

    Facial images are of critical importance in many real-world applications from gaming to surveillance. The current literature on facial image analysis, from face detection to face and facial expression recognition, mainly addresses either RGB, Depth (D), or both of these modalities. But......, such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons including facial images of different rotations, illuminations, and expressions. Furthermore, a face recognition...... algorithm has been developed to use these images. The experimental results show that face recognition using such three modalities provides better results compared to face recognition in any of such modalities in most of the cases....

  4. Implicit recognition based on lateralized perceptual fluency.

    Science.gov (United States)

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  5. Human action recognition using trajectory-based representation

    Directory of Open Access Journals (Sweden)

    Haiam A. Abdul-Azim

    2015-07-01

    Full Text Available Recognizing human actions in video sequences has been a challenging problem in the last few years due to its real-world applications. A lot of action representation approaches have been proposed to improve the action recognition performance. Despite the popularity of local feature-based approaches together with the “Bag-of-Words” model for action representation, they fail to capture adequate spatial or temporal relationships. In an attempt to overcome this problem, trajectory-based local representation approaches have been proposed to capture the temporal information. This paper introduces an improvement of trajectory-based human action recognition approaches to capture discriminative temporal relationships. In our approach, we extract trajectories by tracking the detected spatio-temporal interest points, named “cuboid features”, by matching their SIFT descriptors over consecutive frames. We also propose a linking and exploring method to obtain efficient trajectories for motion representation in realistic conditions. Then the volumes around the trajectories’ points are described to represent human actions based on the Bag-of-Words (BOW) model. Finally, a support vector machine is used to classify human actions. The effectiveness of the proposed approach was evaluated on three popular datasets (KTH, Weizmann and UCF sports). Experimental results showed that the proposed approach yields considerable performance improvement over the state-of-the-art approaches.
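
    A hedged outline of the Bag-of-Words stage described above: local descriptors extracted along trajectories are quantized against a k-means codebook, each video becomes a histogram of visual words, and a support vector machine classifies the histograms. The descriptor dimensionality, vocabulary size, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descriptors, codebook):
    """Quantize a video's local descriptors and return a normalized word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical training data: each video yields a (n_i, 128) array of trajectory descriptors
rng = np.random.default_rng(0)
train_videos = [rng.random((rng.integers(200, 400), 128)) for _ in range(30)]
train_labels = rng.integers(0, 3, size=30)        # e.g. walk / run / wave

codebook = KMeans(n_clusters=64, random_state=0, n_init=10)
codebook.fit(np.vstack(train_videos))             # visual vocabulary

X_train = np.array([bow_histogram(v, codebook) for v in train_videos])
clf = SVC(kernel="rbf").fit(X_train, train_labels)

test_video = rng.random((250, 128))
print(clf.predict([bow_histogram(test_video, codebook)]))
```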

  6. Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Lokesh Selvaraj

    2014-01-01

    Full Text Available Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM helps in doing the recognition. The novelty at this stage lies in one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.

  7. Handwritten Character Recognition Based on the Specificity and the Singularity of the Arabic Language

    Directory of Open Access Journals (Sweden)

    Youssef Boulid

    2017-08-01

    Full Text Available A good Arabic handwriting recognition system must consider the characteristics of Arabic letters, which can be explicit, such as the presence of diacritics, or implicit, such as the baseline information (a virtual line on which cursive text is aligned and joined). In order to find an adequate method of feature extraction, we have taken into consideration the nature of Arabic characters. The paper investigates two methods based on two different visions: one describes the image in terms of the distribution of pixels, and the other describes it in terms of local patterns. Spatial Distribution of Pixels (SDP) is used according to the first vision, whereas Local Binary Patterns (LBP) are used for the second one. Tested on the Arabic portion of the Isolated Farsi Handwritten Character Database (IFHCDB) and using neural networks as a classifier, SDP achieves a recognition rate of around 94% while LBP achieves a recognition rate of about 96%.
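
    As a hedged sketch of the LBP-based description used in the second method above, scikit-image can compute a local binary pattern map whose histogram then serves as the feature vector for a neural-network classifier; the radius, number of sampling points, classifier settings, and synthetic character images are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram of a grayscale character image (uint8)."""
    lbp = local_binary_pattern(gray_image, P=points, R=radius, method="uniform")
    n_bins = points + 2                              # uniform patterns + "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

# Hypothetical usage on synthetic 32x32 character images
rng = np.random.default_rng(0)
images = (rng.random((200, 32, 32)) * 255).astype(np.uint8)
labels = rng.integers(0, 28, size=200)               # e.g. 28 letter classes

X = np.array([lbp_histogram(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```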

  8. Impact of a voice recognition system on report cycle time and radiologist reading time

    Science.gov (United States)

    Melson, David L.; Brophy, Robert; Blaine, G. James; Jost, R. Gilbert; Brink, Gary S.

    1998-07-01

    Because of its exciting potential to improve clinical service, as well as reduce costs, a voice recognition system for radiological dictation was recently installed at our institution. This system will be clinically successful if it dramatically reduces radiology report turnaround time without substantially affecting radiologist dictation and editing time. This report summarizes an observer study currently under way in which radiologist reporting times using the traditional transcription system and the voice recognition system are compared. Four radiologists are observed interpreting portable intensive care unit (ICU) chest examinations at a workstation in the chest reading area. Data are recorded with the radiologists using the transcription system and using the voice recognition system. The measurements distinguish between time spent performing clerical tasks and time spent actually dictating the report. Editing time and the number of corrections made are recorded. Additionally, statistics are gathered to assess the voice recognition system's impact on the report cycle time -- the time from report dictation to availability of an edited and finalized report -- and the length of reports.

  9. AUTOMATIC SPEECH RECOGNITION SYSTEM CONCERNING THE MOROCCAN DIALECTE (Darija and Tamazight)

    OpenAIRE

    A. EL GHAZI; C. DAOUI; N. IDRISSI

    2012-01-01

    In this work we present an automatic speech recognition system for Moroccan dialects, mainly Darija (an Arabic dialect) and Tamazight. Many approaches have been used to model the Arabic and Tamazight phonetic units. In this paper, we propose to use the hidden Markov model (HMM) for modeling these phonetic units. Experimental results show that the proposed approach further improves recognition.

  10. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    Science.gov (United States)

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  11. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    Full Text Available A fixed point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of presentation accuracy of the operators and the signal. The signal processing error is found out to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set up by the Symbian and Series 60.
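
    The numerical-error issue discussed above can be illustrated with a small, hedged experiment: quantize floating-point MFCC-like features to a fixed-point format with a chosen number of fractional bits and measure the resulting error. The Q-format, word length, and feature values are assumptions, not the paper's implementation.

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, word_bits=16):
    """Simulate signed fixed-point storage: scale, round, saturate, rescale."""
    scale = 2 ** frac_bits
    max_int = 2 ** (word_bits - 1) - 1
    min_int = -2 ** (word_bits - 1)
    q = np.clip(np.round(x * scale), min_int, max_int)
    return q / scale

# Hypothetical MFCC feature matrix (frames x 13 coefficients)
rng = np.random.default_rng(0)
mfcc = rng.normal(0.0, 5.0, size=(300, 13))

for frac_bits in (4, 8, 12):
    err = np.abs(mfcc - to_fixed_point(mfcc, frac_bits)).mean()
    print(f"Q{16 - 1 - frac_bits}.{frac_bits}: mean abs error = {err:.5f}")
```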

  12. Study on recognition algorithm for paper currency numbers based on neural network

    Science.gov (United States)

    Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao

    2008-12-01

    Based on the unique characteristic, the paper currency numbers can be put into record and the automatic identification equipment for paper currency numbers is supplied to currency circulation market in order to provide convenience for financial sectors to trace the fiduciary circulation socially and provide effective supervision on paper currency. Simultaneously it is favorable for identifying forged notes, blacklisting the forged notes numbers and solving the major social problems, such as armor cash carrier robbery, money laundering. For the purpose of recognizing the paper currency numbers, a recognition algorithm based on neural network is presented in the paper. Number lines in original paper currency images can be draw out through image processing, such as image de-noising, skew correction, segmentation, and image normalization. According to the different characteristics between digits and letters in serial number, two kinds of classifiers are designed. With the characteristics of associative memory, optimization-compute and rapid convergence, the Discrete Hopfield Neural Network (DHNN) is utilized to recognize the letters; with the characteristics of simple structure, quick learning and global optimum, the Radial-Basis Function Neural Network (RBFNN) is adopted to identify the digits. Then the final recognition results are obtained by combining the two kinds of recognition results in regular sequence. Through the simulation tests, it is confirmed by simulation results that the recognition algorithm of combination of two kinds of recognition methods has such advantages as high recognition rate and faster recognition simultaneously, which is worthy of broad application prospect.

  13. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, attention to research on emotional speech signals in human-machine interfaces has been boosted by the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available speech emotion recognition methods based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights the current promising directions for improvement of speech emotion recognition systems.

  14. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
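
    Several of the similarity measures surveyed in the book above are available directly in SciPy; the following hedged snippet shows Minkowski, cosine, Mahalanobis, and Hausdorff distances on toy feature vectors (the data are illustrative, not from the book).

```python
import numpy as np
from scipy.spatial.distance import minkowski, cosine, mahalanobis, directed_hausdorff

rng = np.random.default_rng(0)
face_a = rng.random(128)            # hypothetical face feature vectors
face_b = rng.random(128)

print("Minkowski (p=3):", minkowski(face_a, face_b, p=3))
print("Cosine distance:", cosine(face_a, face_b))

# Mahalanobis needs the inverse covariance of a reference population of faces.
population = rng.random((500, 128))
inv_cov = np.linalg.inv(np.cov(population, rowvar=False))
print("Mahalanobis:", mahalanobis(face_a, face_b, inv_cov))

# The Hausdorff distance compares two point sets, e.g. facial landmark sets.
landmarks_a = rng.random((68, 2))
landmarks_b = rng.random((68, 2))
d_ab = directed_hausdorff(landmarks_a, landmarks_b)[0]
d_ba = directed_hausdorff(landmarks_b, landmarks_a)[0]
print("Hausdorff:", max(d_ab, d_ba))
```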

  15. Image-based automatic recognition of larvae

    Science.gov (United States)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, imagoes have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread abroad more easily with the circulation of agricultural and forest products. In this paper, larvae are taken as new research objects and recognized by means of machine vision, image processing and pattern recognition. More visual information is preserved and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and the automatic identification of larvae images is successfully achieved with satisfactory results.

  16. Human body contour data based activity recognition.

    Science.gov (United States)

    Myagmarbayar, Nergui; Yuki, Yoshida; Imamoglu, Nevrez; Gonzalez, Jose; Otake, Mihoko; Yu, Wenwei

    2013-01-01

    This research work aims to develop autonomous bio-monitoring mobile robots, which are capable of tracking and measuring patients' motions, recognizing the patients' behavior based on observation data, and calling for medical personnel in emergency situations in a home environment. The robots to be developed will bring about cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs). In our previous research, a full framework was established towards this research goal. In this research, we aimed at improving the human activity recognition by using contour data of the tracked human subject, extracted from the depth images, as the signal source, instead of the lower limb joint angle data used in the previous research, which are more likely to be affected by the motion of the robot and human subjects. Several geometric parameters, such as the ratio of height to width of the tracked human subject and the distance (in pixels) between the centroid points of the upper and lower parts of the human body, were calculated from the contour data and used as the features for activity recognition. A Hidden Markov Model (HMM) is employed to classify different human activities from the features. Experimental results showed that the human activity recognition could be achieved with a high correct-classification rate.
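
    As a hedged sketch of the contour-derived features mentioned above (the abstract does not give the authors' exact definitions), one might compute a height-to-width ratio and the distance between upper- and lower-body centroids from a binary silhouette mask as follows.

```python
import numpy as np

def contour_features(mask):
    """Simple geometric features from a binary human-silhouette mask (H x W).

    Returns (height/width ratio, pixel distance between upper- and lower-half centroids).
    These definitions are illustrative assumptions, not the authors' exact features.
    """
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1

    mid = (ys.max() + ys.min()) // 2
    upper = ys <= mid
    lower = ~upper
    c_upper = np.array([ys[upper].mean(), xs[upper].mean()])
    c_lower = np.array([ys[lower].mean(), xs[lower].mean()])

    return height / width, float(np.linalg.norm(c_upper - c_lower))

# Hypothetical usage on a crude synthetic silhouette
mask = np.zeros((120, 60), dtype=bool)
mask[10:110, 20:40] = True          # standing "person"
print(contour_features(mask))
```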

  17. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). This method uses the improved LTP to extract features, and then uses an improved deep belief network as the detector and classifier of the LTP features. The combination of LTP and the improved deep belief network is realized in facial expression recognition. The recognition rate on the CK+ database has improved significantly.

  18. Speech Emotion Recognition Based on Power Normalized Cepstral Coefficients in Noisy Conditions

    Directory of Open Access Journals (Sweden)

    M. Bashirpour

    2016-09-01

    Full Text Available Automatic recognition of speech emotional states in noisy conditions has become an important research topic in the emotional speech recognition area, in recent years. This paper considers the recognition of emotional states via speech in real environments. For this task, we employ the power normalized cepstral coefficients (PNCC in a speech emotion recognition system. We investigate its performance in emotion recognition using clean and noisy speech materials and compare it with the performances of the well-known MFCC, LPCC, RASTA-PLP, and also TEMFCC features. Speech samples are extracted from the Berlin emotional speech database (Emo DB and Persian emotional speech database (Persian ESD which are corrupted with 4 different noise types under various SNR levels. The experiments are conducted in clean train/noisy test scenarios to simulate practical conditions with noise sources. Simulation results show that higher recognition rates are achieved for PNCC as compared with the conventional features under noisy conditions.

  19. Adaptive pattern recognition in real-time video-based soccer analysis

    DEFF Research Database (Denmark)

    Schlipsing, Marc; Salmen, Jan; Tschentscher, Marc

    2017-01-01

    are taken into account. Our contribution is twofold: (1) the deliberate use of machine learning and pattern recognition techniques allows us to achieve high classification accuracy in varying environments. We systematically evaluate combinations of image features and learning machines in the given online......Computer-aided sports analysis is demanded by coaches and the media. Image processing and machine learning techniques that allow for "live" recognition and tracking of players exist. But these methods are far from collecting and analyzing event data fully autonomously. To generate accurate results......, human interaction is required at different stages including system setup, calibration, supervision of classifier training, and resolution of tracking conflicts. Furthermore, the real-time constraints are challenging: in contrast to other object recognition and tracking applications, we cannot treat data...

  20. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  1. Sistem Kontrol Akses Berbasis Real Time Face Recognition dan Gender Information [Access Control System Based on Real-Time Face Recognition and Gender Information]

    Directory of Open Access Journals (Sweden)

    Putri Nurmala

    2015-06-01

    Full Text Available Face recognition with gender information is a computer application for automatically identifying or verifying a person's face captured by a camera. It is usually used in access control systems and can be compared to other biometrics such as fingerprint or iris identification. Many face recognition algorithms have been developed in recent years. The face recognition and gender information in this system are based on the Principal Component Analysis (PCA) method. The computation is simple and fast compared with methods that require extensive learning, such as artificial neural networks. In this access control system, a relay and an Arduino controller are used. This work focuses on face recognition and gender-based information in real time using PCA. The result achieved from the application design is the identification of a person's face together with gender using PCA. The face recognition system using PCA obtains good results, with an 85% success rate on face images tested with a few people and a fairly high degree of accuracy.

  2. PROBABILISTIC APPROACH TO OBJECT DETECTION AND RECOGNITION FOR VIDEOSTREAM PROCESSING

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-07-01

    Full Text Available Purpose: The presented research results aim to improve the theoretical basics of computer vision and artificial intelligence for dynamical systems. The proposed approach to object detection and recognition is based on probabilistic fundamentals to ensure the required level of correct object recognition. Methods: The presented approach is grounded in probabilistic methods, statistical methods of probability density estimation, and computer-based simulation at the verification stage of development. Results: The proposed approach to object detection and recognition for video stream processing has shown several advantages in comparison with existing methods due to its simple realization and short processing time. The presented results of experimental verification look plausible for object detection and recognition in video streams. Discussion: The approach can be implemented in dynamical systems within changeable environments, such as remotely piloted aircraft systems, and can be a part of artificial intelligence in navigation and control systems.

  3. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain

    2017-08-01

    Full Text Available Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensics analysis. Despite this high level of attention to facial recognition, the success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM, combines the best features of the well-known SSIM (structural similarity index measure and FSIM (feature similarity index measure approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio, using ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.
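
    A hedged approximation of the idea behind FSM (the exact weighting and edge term in the paper are not reproduced here): combine a standard SSIM score with an edge-map agreement term computed from Canny edges, using scikit-image. The blending weight and Dice-style edge term are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.feature import canny

def feature_structural_score(img_a, img_b, alpha=0.5, data_range=1.0):
    """Blend SSIM with an edge-overlap term (Dice coefficient of Canny edge maps).

    alpha and the Dice-based edge term are illustrative assumptions, not the FSM formula.
    """
    ssim = structural_similarity(img_a, img_b, data_range=data_range)
    edges_a = canny(img_a)
    edges_b = canny(img_b)
    overlap = 2 * np.logical_and(edges_a, edges_b).sum()
    denom = edges_a.sum() + edges_b.sum()
    edge_term = overlap / denom if denom else 1.0
    return alpha * ssim + (1 - alpha) * edge_term

# Hypothetical usage: compare a face image with a noisy copy
rng = np.random.default_rng(0)
face = rng.random((64, 64))
noisy = np.clip(face + rng.normal(0, 0.05, face.shape), 0, 1)
print(feature_structural_score(face, noisy))
```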

  4. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  5. Face sketch recognition based on edge enhancement via deep learning

    Science.gov (United States)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch image into a synthesized sketch face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face image before and after super-resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super-resolution not only better describes image details such as hair, nose and mouth, but also improves the recognition accuracy effectively.

  6. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
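
    For readers unfamiliar with the SRC baseline that LCJDSRC extends, a hedged sketch of plain sparse-representation classification follows: a test vector is coded as a sparse combination of all training vectors (here via orthogonal matching pursuit) and assigned to the class with the smallest reconstruction residual. The sparsity level and synthetic data are assumptions, and the joint dynamic and locality constraints of the paper are not implemented.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(train_X, train_y, test_x, n_nonzero=20):
    """Classic sparse-representation classification by minimum class residual."""
    # Columns of the dictionary are L2-normalized training samples.
    D = train_X.T / np.linalg.norm(train_X, axis=1)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, test_x)
    coef = omp.coef_

    best_class, best_residual = None, np.inf
    for c in np.unique(train_y):
        coef_c = np.where(train_y == c, coef, 0.0)      # keep only class-c coefficients
        residual = np.linalg.norm(test_x - D @ coef_c)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Hypothetical usage with synthetic "face sub-image" vectors
rng = np.random.default_rng(0)
train_X = rng.random((200, 300))        # 200 training samples, 300-D features
train_y = np.repeat(np.arange(20), 10)  # 20 subjects, 10 samples each
test_x = train_X[37] + 0.05 * rng.standard_normal(300)
print(src_predict(train_X, train_y, test_x))   # expected: subject 3
```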

  7. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  8. [Research of electroencephalography representational emotion recognition based on deep belief networks].

    Science.gov (United States)

    Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei

    2018-04-01

    In recent years, with the rapid development of machine learning techniques, deep learning algorithms have been widely used in one-dimensional physiological signal processing. In this paper we used electroencephalography (EEG) signals and a deep belief network (DBN) model, implemented in an open-source deep learning framework, to identify emotional states (positive, negative and neutral), and the results of the DBN were compared with a support vector machine (SVM). The EEG signals were collected from subjects under different emotional stimuli, and DBN and SVM were adopted to classify the signals across different features and different frequency bands. We found that the average accuracy of the differential entropy (DE) feature with the DBN is 89.12%±6.54%, which is better than previous research based on the same data set. At the same time, the classification results of the DBN are better than those of the traditional SVM (average classification accuracy of 84.2%±9.24%), with better accuracy and stability. In three experiments at different time points, a single subject achieved consistent classification results with the DBN (mean standard deviation 1.44%), and the experimental results show that the system has steady performance and good repeatability. According to our research, the DE feature gives better classification results than the other features. Furthermore, the Beta band and the Gamma band have higher classification accuracy in the emotion recognition model. To sum up, the performance of the classifiers is improved by using the deep learning algorithm, which provides a reference for establishing a more accurate emotion recognition system. Meanwhile, we can trace the recognition results to find the brain regions and frequency bands that are related to the emotions, which can help us understand the emotional mechanism better. This study has a high academic value and
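
    A minimal sketch of the differential-entropy (DE) feature that the study reports as most discriminative, computed per EEG band under a Gaussian approximation. The band edges, filter order, sampling rate and the use of SciPy are assumptions, and the DBN classifier itself is not shown:

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(eeg, fs=200.0):
    """eeg: 1-D signal for one channel; returns {band name: DE value}."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, eeg)
        # For an approximately Gaussian band-limited signal,
        # DE = 0.5 * ln(2 * pi * e * variance)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * np.var(band))
    return feats
```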

  9. An Intelligent Systems Approach to Automated Object Recognition: A Preliminary Study

    Science.gov (United States)

    Maddox, Brian G.; Swadley, Casey L.

    2002-01-01

    Attempts at fully automated object recognition systems have met with varying levels of success over the years. However, none of the systems have achieved high enough accuracy rates to be run unattended. One of the reasons for this may be that they are designed from the computer's point of view and rely mainly on image-processing methods. A better solution to this problem may be to make use of modern advances in computational intelligence and distributed processing to try to mimic how the human brain is thought to recognize objects. As humans combine cognitive processes with detection techniques, such a system would combine traditional image-processing techniques with computer-based intelligence to determine the identity of various objects in a scene.

  10. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and later translating each gesture into a vocal language. For recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
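
    The matching step named in the abstract, Dynamic Time Warping, can be sketched in a few lines; the feature representation of a gesture and the template dictionary below are hypothetical stand-ins for the skeleton features the system would actually capture:

```python
import numpy as np

def dtw_distance(a, b):
    """a, b: sequences of feature vectors, shapes (Ta, d) and (Tb, d)."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])           # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def recognize(gesture, dictionary):
    """dictionary: {label: template sequence}; returns the best-matching label."""
    return min(dictionary, key=lambda k: dtw_distance(gesture, dictionary[k]))
```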

  11. FPGA IMPLEMENTATION OF ADAPTIVE INTEGRATED SPIKING NEURAL NETWORK FOR EFFICIENT IMAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Pasupathi

    2014-05-01

    Full Text Available Image recognition is a technology which can be used in various applications such as medical image recognition systems, security, defense video tracking, and factory automation. In this paper we present a novel pipelined architecture of an adaptive integrated Artificial Neural Network for image recognition. In our proposed work we have combined the spiking neuron concept with an ANN to achieve an efficient architecture for image recognition. The set of training images is used to train the ANN and the target output is identified. Real-time videos are captured and then converted into frames for testing, and the images are recognized. The machine can operate at up to 40 frames/sec using images acquired from the camera. The system has been implemented on an XC3S400 Spartan-3 Field Programmable Gate Array.

  12. Mining Data of Noisy Signal Patterns in Recognition of Gasoline Bio-Based Additives using Electronic Nose

    Directory of Open Access Journals (Sweden)

    Osowski Stanisław

    2017-03-01

    Full Text Available The paper analyses the distorted data of an electronic nose in recognizing gasoline bio-based additives. Different data mining tools, such as data clustering, principal component analysis, wavelet transformation, support vector machines and random forests of decision trees, are applied. Special stress is put on the robustness of the signal processing system to the noise distorting the registered sensor signals. A denoising procedure based on the discrete wavelet transformation is proposed, which reduces the recognition error rate significantly. The numerical results of experiments devoted to the recognition of different blends of gasoline show the superiority of the support vector machine in a noisy measurement environment.
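
    A hedged sketch of a DWT-based denoising step in the spirit of the paper (the wavelet family, decomposition level and threshold rule are assumptions, not the authors' exact procedure), using PyWavelets:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a 1-D sensor signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```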

  13. Optimal pattern synthesis for speech recognition based on principal component analysis

    Science.gov (United States)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    This work develops and presents an algorithm for building an optimal pattern for automatic speech recognition that increases the probability of correct recognition. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and the optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we present experimental results showing the improvement in speech recognition achieved by the proposed optimization algorithm.
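
    The idea of restricting the search to a few principal-component coefficients can be sketched as follows; `recognition_score` is a hypothetical callback that evaluates the correct-recognition rate on the training samples, and the choice of optimizer is an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.decomposition import PCA

def optimize_pattern(initial_pattern, training_patterns, recognition_score, n_pc=5):
    """Tune only the leading PCA coefficients of the reference pattern."""
    pca = PCA(n_components=n_pc).fit(training_patterns)        # basis from training samples
    c0 = pca.transform(initial_pattern.reshape(1, -1))[0]      # initial coefficients

    def reconstruct(c):
        return pca.inverse_transform(c.reshape(1, -1))[0]

    objective = lambda c: -recognition_score(reconstruct(c))   # maximize recognition
    res = minimize(objective, c0, method="Nelder-Mead")        # low-dimensional search
    return reconstruct(res.x)                                  # optimized pattern
```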

  14. Electromyography (EMG) signal recognition using combined discrete wavelet transform based adaptive neuro-fuzzy inference systems (ANFIS)

    Science.gov (United States)

    Arozi, Moh; Putri, Farika T.; Ariyanto, Mochammad; Khusnul Ari, M.; Munadi, Setiawan, Joga D.

    2017-01-01

    The number of people with disabilities increases from year to year, whether due to congenital factors, sickness, accidents or war. One form of disability is the loss of hand function. This condition encourages the search for solutions in the form of an artificial hand with abilities approaching those of a human hand. Advances in neuroscience currently allow electromyography (EMG) signals to be used as the input for controlling the motion of an artificial prosthetic hand. This study is the beginning of a larger planned research effort on developing an artificial prosthetic hand with EMG signal input, and it focuses on EMG signal recognition. Preliminary results show that EMG signal recognition using a combined discrete wavelet transform and Adaptive Neuro-Fuzzy Inference System (ANFIS) produces an accuracy of 98.3% for training and 98.51% for testing. Thus the results can be used as an input signal for the Simulink block diagram of a prosthetic hand that will be developed in the next study. The research will proceed with the construction of the artificial prosthetic hand along with the Simulink control program, integrating everything into one system.
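
    A sketch of the discrete-wavelet-transform feature stage only (the ANFIS classifier is not reproduced here; the wavelet choice and the per-sub-band statistics are assumptions), using PyWavelets on one windowed EMG channel:

```python
import numpy as np
import pywt

def dwt_emg_features(window, wavelet="db2", level=3):
    """window: 1-D EMG samples; returns simple statistics per wavelet sub-band."""
    coeffs = pywt.wavedec(window, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)),            # mean absolute value
                  np.sqrt(np.mean(c ** 2)),      # RMS energy
                  np.var(c)]                     # variance
    return np.array(feats)
```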

  15. Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition

    Science.gov (United States)

    Khayat, Omid; Afarideh, Hossein

    2013-04-01

    Track counting algorithms, as one of the fundamental tools of nuclear science, have received increasing attention in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the task of imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. The features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are finally validated on 100 alpha-track images and 40 training samples. It is shown that the principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature-based methods.

  16. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. To address this, the paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. Firstly, the color characteristics of a large number of forest fire image samples are prepared and analyzed. Using the K-means clustering algorithm and comparing two commonly used color spaces, a forest flame model is obtained, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model; the method can be applied to forest fire identification in different scenes and is feasible in practice.
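
    A hedged sketch of the colour-space step: cluster the pixels of a frame in YCrCb with K-means and keep the cluster whose chrominance statistics look flame-like. The simple high-Cr/low-Cb scoring rule is an assumption, not the paper's exact flame model:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def suspected_flame_mask(bgr_image, k=4):
    """Return a binary mask (uint8, 0/255) of the most flame-like pixel cluster."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    pixels = ycrcb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    # Flame pixels tend to have large Cr (index 1) and small Cb (index 2).
    scores = [pixels[labels == i, 1].mean() - pixels[labels == i, 2].mean()
              for i in range(k)]
    flame_cluster = int(np.argmax(scores))
    mask = (labels == flame_cluster).reshape(bgr_image.shape[:2])
    return mask.astype(np.uint8) * 255
```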

  17. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+ databases, respectively. Better cross-database performance has also been observed.

  18. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification alg

  19. Sistema audiovisual para reconocimiento de comandos Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

    Full Text Available We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented by Mel cepstral coefficients and their first and second order time derivatives. In order to characterize the video signal, a set of high-level visual features was tracked automatically throughout the sequences. Automatic initialization of the algorithm was performed using color transformations and active contour models based on Gradient Vector Flow ("GVF snakes") on the lip region, whereas tracking used similarity measures across neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of the automatic speech recognition system using only audio information (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the design of systems using only video features (VSR) and using combined audio and video features (AVSR). Finally, the results of the three systems are compared on our own database in Spanish and French, and the influence of acoustic noise is shown, demonstrating that the AVSR system is more robust than ASR and VSR.
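
    The audio front end described above (Mel cepstral coefficients plus first and second temporal derivatives) can be sketched with librosa; frame parameters are left at library defaults and the file-based interface is an assumption:

```python
import numpy as np
import librosa

def audio_features(wav_path, n_mfcc=13):
    """Return one 3*n_mfcc-dimensional feature vector per audio frame."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc, order=1)   # first temporal derivative
    d2 = librosa.feature.delta(mfcc, order=2)   # second temporal derivative
    return np.vstack([mfcc, d1, d2]).T          # shape: (n_frames, 3 * n_mfcc)
```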

  20. Evaluation of calix[4]arene tethered Schiff bases for anion recognition

    International Nuclear Information System (INIS)

    Chawla, H.M.; Munjal, Priyanka

    2016-01-01

    Two calix[4]arene tethered Schiff base derivatives (L1 and L2) have been synthesized and their ion recognition capability has been evaluated through NMR, UV–vis and fluorescence spectroscopy. L1 interacts with cyanide ions very selectively to usher in a significant change in color and fluorescence intensity. On the other hand, L2 does not show selectivity for anion sensing despite having the same functional groups as those present in L1. The differential observations may be attributed to plausible stereo control of anion recognition and tautomerization in the synthesized Schiff base derivatives.

  1. Evaluation of calix[4]arene tethered Schiff bases for anion recognition

    Energy Technology Data Exchange (ETDEWEB)

    Chawla, H.M., E-mail: hmchawla@chemistry.iitd.ac.in; Munjal, Priyanka

    2016-11-15

    Two calix[4]arene tethered Schiff base derivatives (L1 and L2) have been synthesized and their ion recognition capability has been evaluated through NMR, UV–vis and fluorescence spectroscopy. L1 interacts with cyanide ions very selectively to usher in a significant change in color and fluorescence intensity. On the other hand, L2 does not show selectivity for anion sensing despite having the same functional groups as those present in L1. The differential observations may be attributed to plausible stereo control of anion recognition and tautomerization in the synthesized Schiff base derivatives.

  2. Wavelet-based moment invariants for pattern recognition

    Science.gov (United States)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance: a small shift in the input signal can produce very different wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. Gaussian white noise is added to the noise-free images, with noise levels varying over different signal-to-noise ratios. Experiments conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. The proposed wavelet-based moments do an excellent job even when the noise levels are very high.

  3. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    Science.gov (United States)

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

    Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices worn on both arms consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated a superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of a single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing a robust and wearable EMG-ACC-based SLR system.
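
    A hedged sketch of the classification stage: window-level statistics from the fused ACC and sEMG channels fed to a random forest. The feature choices are assumptions, and the paper's improved base decision tree is not reproduced, only scikit-learn's standard ensemble:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (n_samples, n_channels) of fused ACC + sEMG data for one subword."""
    return np.concatenate([window.mean(axis=0),                       # per-channel mean
                           window.std(axis=0),                        # per-channel spread
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])  # mean abs slope

def train_slr(windows, labels, n_trees=100):
    X = np.array([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, labels)
```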

  4. Facial expression recognition in the wild based on multimodal texture features

    Science.gov (United States)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including those from our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers on these features using the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. The final results we achieved are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.

  5. A system of automatic speaker recognition on a minicomputer

    International Nuclear Information System (INIS)

    El Chafei, Cherif

    1978-01-01

    This study describes an automatic speaker recognition system based on voice pitch. Pre-processing consists of extracting the speakers' discriminating characteristics from the pitch. The recognition program first performs a preselection and then computes the distance between the characteristics of the speaker to be recognized and those of the speakers already recorded. A recognition experiment was carried out with 15 speakers and included 566 tests spread over an intermittent period of four months. The discriminating characteristics used offer several interesting qualities, and the algorithms for measuring the characteristics on the one hand and classifying the speakers on the other are simple. The results obtained in real time on a minicomputer are satisfactory; they could probably be improved by considering other discriminating characteristics of the speakers, but this was not within our possibilities. (author) [fr

  6. Pattern recognition of state variables by neural networks

    International Nuclear Information System (INIS)

    Faria, Eduardo Fernandes; Pereira, Claubia

    1996-01-01

    An artificial intelligence system based on artificial neural networks can be used to classify predefined events and emergency procedures, and such systems are being used in different areas. In nuclear reactor safety, the goal is the classification of events whose data can be processed and recognized by neural networks. In this work we present a preliminary, simple system that uses neural networks for pattern recognition, namely the recognition of the variables which define a situation. (author)

  7. Practising verbal maritime communication with computer dialogue systems using automatic speech recognition (My Practice session)

    OpenAIRE

    John, Peter; Wellmann, J.; Appell, J.E.

    2016-01-01

    This My Practice session presents a novel online tool for practising verbal communication in a maritime setting. It is based on low-fi ChatBot simulation exercises which employ computer-based dialogue systems. The ChatBot exercises are equipped with an automatic speech recognition engine specifically designed for maritime communication. The speech input and output functionality enables learners to communicate with the computer freely and spontaneously. The exercises replicate real communicati...

  8. An interactive VR system based on full-body tracking and gesture recognition

    Science.gov (United States)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with a hand-held input device, which leads to a low degree of presence. Other solutions use sensors such as Leap Motion to recognize user gestures in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. Movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the user's hands compared with traditional navigation using a hand-held device. We use the point cloud data from the Kinect depth camera to recognize user gestures, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition with Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.

  9. Cost-Sensitive Learning for Emotion Robust Speaker Recognition

    Directory of Open Access Journals (Sweden)

    Dongdong Li

    2014-01-01

    Full Text Available In the field of information security, voice is one of the most important parts of biometrics. Especially with the development of voice communication through the Internet or telephone systems, huge voice data resources are accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his or her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance the robustness of emotion-dependent speaker recognition. Based on that technology, a new architecture of the recognition system as well as its components is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an improvement of 8% in identification rate over traditional speaker recognition is achieved.

  10. Cost-sensitive learning for emotion robust speaker recognition.

    Science.gov (United States)

    Li, Dongdong; Yang, Yingchun; Dai, Weihui

    2014-01-01

    In the field of information security, voice is one of the most important parts of biometrics. Especially with the development of voice communication through the Internet or telephone systems, huge voice data resources are accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his or her identity. However, speech with various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can effectively enhance the robustness of emotion-dependent speaker recognition. Based on that technology, a new architecture of the recognition system as well as its components is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an improvement of 8% in identification rate over traditional speaker recognition is achieved.

  11. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video search system that uses face recognition as its indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such a search depends on the quality of the face images recorded in the video signals. Since surveillance cameras record videos without fixed postures for the subject, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the faces from the available information. Experimental results show that the system processes real-life videos with very high efficiency and is robust to various kinds of face occlusion. Hence it can relieve human reviewers from constantly watching the monitors and greatly enhances efficiency. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
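
    A simplified sketch of the reconstruction idea using ordinary PCA (the paper's fuzzy PCA is not reproduced): project only the visible pixels of an occluded face onto an eigenface basis via least squares and fill the occluded region from the reconstruction:

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruct_occluded(face, mask, pca):
    """face: flattened image vector; mask: boolean vector, True where pixels are visible;
    pca: a PCA model already fitted on unoccluded training faces."""
    B = pca.components_.T                      # (n_pixels, n_components) eigenface basis
    centered = face - pca.mean_
    coef, *_ = np.linalg.lstsq(B[mask], centered[mask], rcond=None)  # fit visible pixels
    recon = pca.mean_ + B @ coef
    out = face.copy()
    out[~mask] = recon[~mask]                  # keep observed pixels, fill the occlusion
    return out
```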

  12. Dynamic Recognition of Driver’s Propensity Based on GPS Mobile Sensing Data and Privacy Protection

    OpenAIRE

    Xiaoyuan Wang; Jianqiang Wang; Jinglei Zhang; Jingheng Wang

    2016-01-01

    Driver's propensity is a dynamic measure of the driver's emotional preference characteristics during the driving process. It is a core parameter for computing the driver's intention and awareness in safety driving assistance systems, especially in vehicle collision warning systems. It is also an important factor in achieving Driver-Vehicle-Environment collaborative wisdom and control at the macroscopic level. In this paper, a dynamic recognition model of driver's propensity based on support vector machine is...

  13. Driving profile modeling and recognition based on soft computing approach.

    Science.gov (United States)

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart card as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers.

  14. Pipeline leakage recognition based on the projection singular value features and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Wei; Zhang, Laibin; Mingda, Wang; Jinqiu, Hu [College of Mechanical and Transportation Engineering, China University of Petroleum, Beijing, (China)

    2010-07-01

    The negative pressure wave method is one of the processes used to detect leaks in oil pipelines. The development of new leakage recognition methods is difficult because it is practically impossible to collect leakage pressure samples. The method of leakage feature extraction and the selection of the recognition model are also important in pipeline leakage detection. This study investigated a new feature extraction approach, Singular Value Projection (SVP), which projects the singular values onto a standard basis. A new pipeline recognition model based on multi-class Support Vector Machines was also developed. It was found that SVP provides a clear and concise recognition feature of the negative pressure wave. Field experiments proved that the model provides a high recognition accuracy rate. This approach to pipeline leakage detection based on SVP and SVM has high application value.
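
    A sketch of the recognition chain under stated assumptions: each negative-pressure-wave segment is folded into a matrix, its normalized singular values serve as the feature vector (a simplification of the paper's singular value projection), and a multi-class SVM performs the recognition. Segments are assumed to have a fixed length:

```python
import numpy as np
from sklearn.svm import SVC

def svp_features(segment, rows=16):
    """Fold a fixed-length 1-D pressure segment into a matrix and take its singular values."""
    cols = len(segment) // rows
    M = np.reshape(segment[: rows * cols], (rows, cols))
    s = np.linalg.svd(M, compute_uv=False)
    return s / (np.linalg.norm(s) + 1e-12)     # scale-normalized singular values

def train_leak_classifier(segments, labels):
    X = np.array([svp_features(s) for s in segments])
    return SVC(kernel="rbf", decision_function_shape="ovo").fit(X, labels)
```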

  15. Implicit Recognition Based on Lateralized Perceptual Fluency

    Directory of Open Access Journals (Sweden)

    Iliana M. Vargas

    2012-02-01

    Full Text Available In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this “implicit recognition” results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  16. Automatic target recognition using a feature-based optical neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  17. Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation

    Directory of Open Access Journals (Sweden)

    Carlos Fernández-Llatas

    2013-10-01

    Full Text Available Born in the early nineteen-nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge to support physicians closely in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires demanding iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To provide this guidance through telehealth, the use of formal clinical guidelines allows the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as the mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized.

  18. Sleep Enhances Explicit Recollection in Recognition Memory

    Science.gov (United States)

    Drosopoulos, Spyridon; Wagner, Ullrich; Born, Jan

    2005-01-01

    Recognition memory is considered to be supported by two different memory processes, i.e., the explicit recollection of information about a previous event and an implicit process of recognition based on a contextual sense of familiarity. Both types of memory supposedly rely on distinct memory systems. Sleep is known to enhance the consolidation of…

  19. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    Science.gov (United States)

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.

  20. HMM Adaptation for Improving a Human Activity Recognition System

    Directory of Open Access Journals (Sweden)

    Rubén San-Segundo

    2016-09-01

    Full Text Available When developing a fully automatic system for evaluating motor activities performed by a person, it is necessary to segment and recognize the different activities in order to focus the analysis. This process must be carried out by a Human Activity Recognition (HAR system. This paper proposes a user adaptation technique for improving a HAR system based on Hidden Markov Models (HMMs. This system segments and recognizes six different physical activities (walking, walking upstairs, walking downstairs, sitting, standing and lying down using inertial signals from a smartphone. The system is composed of a feature extractor for obtaining the most relevant characteristics from the inertial signals, a module for training the six HMMs (one per activity, and the last module for segmenting new activity sequences using these models. The user adaptation technique consists of a Maximum A Posteriori (MAP approach that adapts the activity HMMs to the user, using some activity examples from this specific user. The main results on a public dataset have reported a significant relative error rate reduction of more than 30%. In conclusion, adapting a HAR system to the user who is performing the physical activities provides significant improvement in the system’s performance.
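
    A hedged sketch of the two ingredients named above: one Gaussian HMM per activity (using hmmlearn) and a simple MAP update of the Gaussian means from a few user-specific examples. The relevance factor tau and the mean-only adaptation are assumptions, not the paper's exact adaptation scheme:

```python
import numpy as np
from hmmlearn import hmm

def train_activity_models(sequences_by_activity, n_states=5):
    """sequences_by_activity: {activity: [ (T_i, n_features) arrays ]}."""
    models = {}
    for activity, seqs in sequences_by_activity.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[activity] = m
    return models

def map_adapt_means(model, user_seqs, tau=10.0):
    """MAP-adapt the Gaussian means of one activity HMM using user-specific examples."""
    X = np.vstack(user_seqs)
    post = model.predict_proba(X)                        # per-frame state responsibilities
    n_k = post.sum(axis=0)[:, None]                      # soft counts per state
    user_means = (post.T @ X) / np.maximum(n_k, 1e-8)    # user-specific state means
    model.means_ = (n_k * user_means + tau * model.means_) / (n_k + tau)
    return model
```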