WorldWideScience

Sample records for ecg feature extraction

  1. ECG Feature Extraction Techniques - A Survey Approach

    CERN Document Server

    Karpagachelvi, S; Sivakumar, M

    2010-01-01

    ECG feature extraction plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. A feature extraction scheme determines the amplitudes and intervals of the P-QRS-T segment in the ECG signal for subsequent analysis; these values characterize the functioning of the heart. Recently, numerous techniques have been developed for analyzing the ECG signal, mostly based on fuzzy logic methods, artificial neural networks (ANN), genetic algorithms (GA), support vector machines (SVM), and other signal analysis techniques. All these techniques and algorithms have their advantages and limitations. This paper discusses various techniques and transformations proposed earlier in the literature for extracting features from an ECG signal, and also provides a comparative study of the methods researchers have proposed for ECG feature extraction.
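
    The records below lean heavily on P-QRS-T amplitudes and intervals, so a minimal sketch of that kind of measurement may help orient the reader. It assumes a baseline-corrected single-lead signal and uses SciPy's find_peaks; the threshold and refractory distance are illustrative choices, not values from the surveyed papers.

      # Illustrative sketch: R-peak detection and basic amplitude/interval features.
      import numpy as np
      from scipy.signal import find_peaks

      def rpeak_features(ecg, fs):
          """ecg: baseline-corrected 1-D signal; fs: sampling rate in Hz."""
          # Assume R peaks sit well above the median level and >= 0.3 s apart.
          height = np.median(ecg) + 0.6 * (np.max(ecg) - np.median(ecg))
          r_peaks, _ = find_peaks(ecg, height=height, distance=int(0.3 * fs))
          rr = np.diff(r_peaks) / fs                    # R-R intervals in seconds
          return {"r_amplitude_mean": float(np.mean(ecg[r_peaks])),
                  "rr_mean_s": float(np.mean(rr)),
                  "heart_rate_bpm": 60.0 / float(np.mean(rr))}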

  2. A harmonic linear dynamical system for prominent ECG feature extraction.

    Science.gov (United States)

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted from preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, empirical evaluation results demonstrate improved clustering performance compared with previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  3. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    Directory of Open Access Journals (Sweden)

    Ngoc Anh Nguyen Thi

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted from preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, empirical evaluation results demonstrate improved clustering performance compared with previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  4. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis for nonlinear feature extraction and uses the discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
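
    As a rough illustration of the frequency-domain half of such a pipeline, the sketch below computes per-beat DWT statistics with PyWavelets and trains an RBF-kernel SVM with scikit-learn. The kernel-ICA stage and the genetic-algorithm tuning are omitted; pre-segmented, equal-length beats and a label vector are assumed, and the wavelet, depth, C and gamma are placeholder choices.

      # Simplified stand-in for the multi-domain pipeline: DWT statistics + SVM.
      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def dwt_features(beat, wavelet="db4", level=4):
          coeffs = pywt.wavedec(beat, wavelet, level=level)
          feats = []
          for c in coeffs:                       # approximation + detail bands
              feats += [np.mean(c), np.std(c), np.sum(c ** 2)]
          return np.array(feats)

      def train_classifier(beats, labels):
          X = np.vstack([dwt_features(b) for b in beats])
          clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # GA-tuned in the paper
          return clf.fit(X, labels)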

  5. A novel feature extracting method of QRS complex classification for mobile ECG signals

    Science.gov (United States)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the greater range of patient activity and the lower signal-to-noise ratio in mobile cardiac telemonitoring systems, and cannot meet the identification needs of the ECG signal. Based on an individual sinus heart rhythm template built from mobile ECG signals in a time window, we present semblance indices to extract the classification features of the QRS complex precisely and expeditiously. The relative approximation r2 and the absolute error r3 are used as parameters estimating the semblance between a test QRS complex and the template. Evaluation parameters corresponding to QRS width and type are examined to choose the proper index. The results show that 99.99 percent of the QRS complexes in sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy for sinus and supraventricular beats is no better than that of r2. By the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy for ventricular beats is superior to that of r2. To combine the respective strengths of the three parameters, a nonlinear weighted computation of QRS width, r2 and r3 is introduced, raising the total classification accuracy to 99.48%.
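
    The abstract names r2 and r3 but not their formulas, so the definitions below are hedged stand-ins: r2 as a relative-approximation index of a test QRS complex against the sinus-rhythm template, r3 as the mean absolute error, and the final score as one possible nonlinear weighting of width, r2 and r3.

      # Hedged sketch of semblance indices against a sinus-rhythm template.
      import numpy as np

      def semblance_indices(qrs, template):
          qrs, template = np.asarray(qrs, float), np.asarray(template, float)
          r2 = 1.0 - np.sum((qrs - template) ** 2) / np.sum(template ** 2)  # relative approximation
          r3 = np.mean(np.abs(qrs - template))                              # absolute error
          return r2, r3

      def combined_score(width, r2, r3, w=(0.4, 0.4, 0.2)):
          # One possible nonlinear weighting; the paper's exact form is not given here.
          return w[0] * r2 + w[1] * np.exp(-r3) + w[2] * np.tanh(width)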

  6. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2s of ECG signals.

    Science.gov (United States)

    Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San

    2017-02-07

    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual tree complex wavelet transform (DTCWT)-based methodology is proposed for automated discrimination of CHF ECG signals from normal ones. In the experiment, we performed a DTCWT on ECG segments of 2 s duration, up to six levels, to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristic (ROC), Wilcoxon, t-test and reliefF methods. Ranked features are subjected to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal length, thus providing sufficient time for clinicians to further investigate the severity of CHF and treatments.
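
    Only the ranking-and-classification stage is easy to show compactly. In the sketch below, the feature columns (standing in for the DTCWT statistics computed upstream over 2 s segments) are scored by a two-sample t-test, one of the seven ranking criteria listed, and the top-ranked columns feed scikit-learn's k-NN. Binary labels (0 = normal, 1 = CHF) and the neighbor count are assumptions.

      # Sketch: t-test feature ranking followed by a k-NN classifier.
      import numpy as np
      from scipy.stats import ttest_ind
      from sklearn.neighbors import KNeighborsClassifier

      def rank_by_ttest(X, y):
          t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
          return np.argsort(-np.abs(t))          # best-discriminating columns first

      def fit_topk(X, y, k=45):                  # the paper reports 45 features
          order = rank_by_ttest(X, y)[:k]
          clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, order], y)
          return clf, order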

  7. Piezoelectric extraction of ECG signal

    Science.gov (United States)

    Ahmad, Mahmoud Al

    2016-11-01

    The monitoring and early detection of abnormalities or variations in cardiac cycle functionality are very critical practices that have significant impact on the prevention of heart diseases and their associated complications. Currently, in the field of biomedical engineering, there is a growing need for devices capable of measuring and monitoring a wide range of cardiac cycle parameters continuously, effectively and in real time, using easily accessible and reusable probes. In this paper, the generation and extraction of a corresponding ECG signal using a piezoelectric transducer as an alternative to conventional ECG electrodes is discussed. The piezoelectric transducer picks up the vibrations from the heartbeats and converts them into electrical output signals. To this end, piezoelectric and signal processing techniques were employed to extract the ECG-corresponding signal from the piezoelectric output voltage signal. The measured electrode-based and the extracted piezoelectric-based ECG traces corroborate each other well: their peak amplitudes and locations are well aligned.

  8. Sparse Matrix for ECG Identification with Two-Lead Features

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification that maps two-lead ECG signals onto a two-dimensional matrix and then employs a sparse matrix method to process the matrix; this is the first application of sparse matrix techniques to ECG identification. The results of our experiments demonstrate the benefits of our approach over existing methods.

  9. ECG Identification System Using Neural Network with Global and Local Features

    Science.gov (United States)

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to process the extracted ECG signals. The global shape feature represents the outline information of the ECG signal and the local statistical feature extracts the…

  10. ECG Signal Feature Selection for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Lichen Xun

    2013-01-01

    This paper studies the selection of ECG-based features for emotion recognition. In the feature selection process, we start from an existing feature selection algorithm and also pay special attention to intuitive values on the ECG waveform. Through the use of ANOVA and heuristic search, we picked out features that distinguish the two emotions of joy and pleasure, and we combine this with a pathological analysis of ECG signals from the viewpoint of medical experts to discuss the logical correspondence between the ECG waveform and emotion discrimination. In our experiments, the method picked out only five features while reaching 92% accuracy in recognizing joy and pleasure.
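
    The ANOVA step maps directly onto scikit-learn's univariate selection, as in this minimal sketch; a ready feature matrix X and emotion labels y are assumed, and the heuristic-search half of the paper's selection is not shown.

      # Minimal ANOVA-based selection: keep the five best-scoring features.
      from sklearn.feature_selection import SelectKBest, f_classif

      def pick_five(X, y):
          selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
          return selector.transform(X), selector.get_support(indices=True)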

  11. Extracting Emotion Features from ECG by Using Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    龙正吉; 刘光远

    2011-01-01

    The key to emotion recognition is to effectively extract emotion features from physiological signals. In this paper, a wavelet transform-based feature extraction method is proposed to recognize emotions from ECG (electrocardiogram) signals. After the feature values of four emotion data samples, collected from one subject on the same day, were compared and analyzed, the features with a consistent size relation were taken as the evidence for emotion recognition. The normalized features were used to recognize joy and sadness, and the best correct-classification rate reached 92%.

  12. Fast multi-scale feature fusion for ECG heartbeat classification

    Science.gov (United States)

    Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian

    2015-12-01

    The electrocardiogram (ECG) monitors the electrical activity of the heart through small-amplitude, short-duration signals; as a result, hidden information present in ECG data is difficult to determine, yet this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline and high frequencies are removed to segment heartbeats. Second, wavelet-packet decomposition, an extension of the wavelet transform, is conducted to extract features; it provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, in which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). This method considers the co-relationship among different data information, avoids the disadvantages of high dimensionality, and also reduces computation compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database, and four main heartbeat classes are used to examine the proposed algorithm. Based on the results of five measurements (sensitivity, positive predictivity, accuracy, average accuracy, and t-test), we conclude that a GND-ICA-based strategy can provide enhanced ECG heartbeat classification. Furthermore, largely redundant features are eliminated and classification time is reduced.
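
    The wavelet-packet stage is simple to isolate with PyWavelets, as sketched below; the GND-ICA tensor fusion and SVM stages are beyond a short example, and the wavelet and depth are illustrative.

      # Sketch: per-beat wavelet-packet subband energies with PyWavelets.
      import numpy as np
      import pywt

      def wpd_subband_energies(beat, wavelet="db4", level=3):
          wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="natural")   # 2**level subbands
          return np.array([np.sum(np.square(n.data)) for n in nodes])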

  13. Study of Feature Extraction Based on Autoregressive Modeling in ECG Automatic Diagnosis

    Institute of Scientific and Technical Information of China (English)

    葛丁飞; 侯北平; 项新建

    2007-01-01

    This article explores the ability of the multivariate autoregressive (MAR) model and the scalar AR model to extract features from two-lead electrocardiogram signals in order to classify certain cardiac arrhythmias. The classification performance of four different ECG feature sets based on the model coefficients is shown. The data in the analysis, including normal sinus rhythm, atrial premature contraction, premature ventricular contraction, ventricular tachycardia, ventricular fibrillation and supraventricular tachycardia, are obtained from the MIT-BIH database. The classification is performed using a quadratic discriminant function. The results show that the MAR coefficients produce the best results among the four ECG representations and that MAR modeling is a useful classification and diagnosis tool.

  14. Enhancement of Twins Fetal ECG Signal Extraction Based on Hybrid Blind Extraction Techniques

    Directory of Open Access Journals (Sweden)

    Ahmed Kareem Abdullah

    2017-07-01

    ECG machines are noninvasive systems used to measure the heartbeat signal. It is very important to monitor fetal ECG signals during pregnancy to check heart activity and to detect any problem early, before birth; the monitoring of ECG signals therefore has clinical significance and importance. For a multi-fetal pregnancy, classical filtering algorithms are not sufficient to separate the ECG signals of the mother and fetuses. In this paper the mixture consists of three ECG signals: the first is the mother ECG (M-ECG) signal, the second the Fetal-1 ECG (F1-ECG) signal, and the third the Fetal-2 ECG (F2-ECG) signal; these signals are extracted based on modified blind source extraction (BSE) techniques. The proposed work is based on hybridization of two BSE techniques to ensure that the extracted signals are well separated. The results demonstrate that the proposed approach extracts the useful ECG signals very efficiently.

  15. Genetic algorithm for the optimization of features and neural networks in ECG signals classification

    Science.gov (United States)

    Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu

    2017-01-01

    Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
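
    A toy rendering of the GA dimension-reduction idea (not the paper's implementation): boolean feature masks are evolved with truncation selection, one-point crossover and bit-flip mutation, and each mask is scored by the cross-validated accuracy of a small scikit-learn MLP standing in for the BPNN. The population size, rates and network layout are arbitrary choices.

      # Toy GA feature selection with an MLP fitness function.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      def ga_select(X, y, generations=20, pop=16, rng=np.random.default_rng(0)):
          n = X.shape[1]
          masks = rng.random((pop, n)) < 0.5

          def fitness(m):
              if not m.any():
                  return 0.0
              clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
              return cross_val_score(clf, X[:, m], y, cv=3).mean()

          for _ in range(generations):
              scores = np.array([fitness(m) for m in masks])
              parents = masks[np.argsort(-scores)[: pop // 2]]   # truncation selection
              children = parents.copy()
              cuts = rng.integers(1, n, size=len(children))
              for i, c in enumerate(cuts):                       # one-point crossover
                  children[i, c:] = parents[(i + 1) % len(parents), c:]
              children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
              masks = np.vstack([parents, children])
          scores = np.array([fitness(m) for m in masks])
          return masks[np.argmax(scores)]                        # best feature mask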

  16. Extraction of fetal ECG signal by an improved method using extended Kalman smoother framework from single channel abdominal ECG signal.

    Science.gov (United States)

    Panigrahy, D; Sahu, P K

    2017-02-16

    This paper proposes a five-stage methodology to extract the fetal electrocardiogram (FECG) from the single channel abdominal ECG using a differential evolution (DE) algorithm, an extended Kalman smoother (EKS) and an adaptive neuro fuzzy inference system (ANFIS) framework. The heart rate of the fetus can easily be detected after estimation of the fetal ECG signal. The abdominal ECG signal contains the fetal ECG signal, the maternal ECG component, and noise; to estimate the fetal ECG signal from it, the noise and the maternal ECG component must be removed. The pre-processing stage removes the noise from the abdominal ECG signal. The EKS framework is used to estimate the maternal ECG signal from the abdominal ECG signal. The optimized parameters of the maternal ECG components are required to develop the state and measurement equations of the EKS framework; these parameters are selected by the differential evolution algorithm. The relationship between the maternal ECG signal and the maternal ECG component present in the abdominal ECG signal is nonlinear. To estimate the actual maternal ECG component present in the abdominal ECG signal and to capture this nonlinear relationship, ANFIS is used. Inputs to the ANFIS framework are the output of the EKS and the pre-processed abdominal ECG signal. The fetal ECG signal is computed by subtracting the output of ANFIS from the pre-processed abdominal ECG signal. The non-invasive fetal ECG database and set A of the 2013 PhysioNet/Computing in Cardiology Challenge database (PCDB) are used for validation of the proposed methodology, which shows a sensitivity of 94.21%, accuracy of 90.66%, and positive predictive value of 96.05% on the non-invasive fetal ECG database, and a sensitivity of 91.47%, accuracy of 84.89%, and positive predictive value of 92.18% on set A of the PCDB.

  17. Arrhythmia recognition and classification using combined linear and nonlinear features of ECG signals.

    Science.gov (United States)

    Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa

    2016-04-01

    Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and the electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. Computer-aided analysis of the ECG assists physicians in detecting cardiovascular diseases. The development of many existing arrhythmia systems has depended on findings from linear experiments on ECG data, which achieve high performance on noise-free data. However, nonlinear experiments characterize the ECG signal more effectively, extract hidden information in the ECG signal, and achieve good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). The characterization ability of nonlinear features, such as high order statistics and cumulants, and nonlinear feature reduction methods, such as independent component analysis, are combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, support vector machine and neural network methods with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support vector machine.

  18. A method of ECG template extraction for biometrics applications.

    Science.gov (United States)

    Zhou, Xiang; Lu, Yang; Chen, Meng; Bao, Shu-Di; Miao, Fen

    2014-01-01

    The ECG has attracted widespread attention as one of the most important non-invasive physiological signals in healthcare-related biometrics, owing to characteristics such as ease of monitoring, individual uniqueness and important clinical value. This study proposes a dynamic threshold setting method to extract the most stable ECG waveform as the template for the subsequent ECG identification process. With the proposed method, the accuracy of ECG biometrics using dynamic time warping as the difference measure is significantly improved. Analysis results on a self-built electrocardiogram database show that the proposed method reduced the half total error rate of the ECG biometric system from 3.35% to 1.45%. Its average running time on an Android mobile terminal was around 0.06 seconds, demonstrating acceptable real-time performance.
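
    The template idea can be sketched directly: take the beat with the highest mean correlation to all other beats as the most stable template, then compare candidates with a small dynamic-time-warping distance. The paper's dynamic threshold setting is reduced here to a fixed, illustrative acceptance threshold.

      # Sketch: most-stable-beat template plus a basic DTW comparison.
      import numpy as np

      def most_stable_template(beats):
          beats = np.asarray(beats, dtype=float)     # rows are equal-length beats
          corr = np.corrcoef(beats)                  # beat-to-beat similarity
          return beats[np.argmax(corr.mean(axis=1))]

      def dtw_distance(a, b):
          D = np.full((len(a) + 1, len(b) + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[len(a), len(b)]

      def matches(beat, template, threshold=5.0):    # threshold is illustrative
          return dtw_distance(beat, template) < threshold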

  19. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on the pros and cons of various variable selection methods, particularly those most relevant in the context of HEP.

  20. Extract fetal ECG from single-lead abdominal ECG by de-shape short time Fourier transform and nonlocal median

    CERN Document Server

    Li, Su

    2016-01-01

    The multiple fundamental frequency detection problem and the source separation problem from a single-channel signal containing multiple oscillatory components and a nonstationary noise are both challenging tasks. To extract the fetal electrocardiogram (ECG) from a single-lead maternal abdominal ECG, we face both challenges. In this paper, we propose a novel method to extract the fetal ECG signal from the single channel maternal abdominal ECG signal, without any additional measurement. The algorithm is composed of three main ingredients. First, the maternal and fetal heart rates are estimated by the de-shape short time Fourier transform, which is a recently proposed nonlinear time-frequency analysis technique; second, the beat tracking technique is applied to accurately obtain the maternal and fetal R peaks; third, the maternal and fetal ECG waveforms are established by the nonlocal median. The algorithm is evaluated on a simulated fetal ECG signal database (the fecgsyn database), and tested on two real data...

  1. Simulation methods for the online extraction of ECG parameters under Matlab/Simulink.

    Science.gov (United States)

    von Wagner, G; Kunzmann, U; Schöchlin, J; Bolz, A

    2002-01-01

    The classification of cardiac pathologies in the human ECG greatly depends on the reliable extraction of characteristic features. This work presents a complete simulation environment for testing ECG classification algorithms under Matlab/Simulink. Evaluation of algorithm performance is undertaken in full compliance with the ANSI/AAMI standards EC38 and EC57, and ranges from beat-to-beat analysis to the comparison of episode markers (e.g., for VT/VF detection algorithms). For testing the quality of waveform boundary detection, our own testing methods have been implemented in compliance with existing literature.

  2. Compressed domain ECG biometric with two-lead features

    Science.gov (United States)

    Lee, Wan-Jou; Chang, Wen-Whei

    2016-07-01

    This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification as it by-passes the full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.

  3. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MA Huimin; WANG Yan

    2003-01-01

    Very high speed image processing is needed in some applications, especially for weapons. In this paper, a high speed image feature extraction system with a parallel structure was implemented in a complex programmable logic device (CPLD); it can perform image feature extraction in several microseconds, almost without delay. The design is presented through the application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image. Feature extraction operates on both kinds of features. Edges and areas are the two most important features of an image. Angles often exist at the connections between different parts of a target's image, indicating where one area ends and another begins. These three key features can form the whole representation of an image, so this parallel feature extraction system includes three processing modules: edge extraction, angle extraction and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one processor route, every route has the same circuit form, and all routes work together, controlled by a common clock, to realize feature extraction. The extraction system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.

  4. Selecting Features of Single Lead ECG Signal for Automatic Sleep Stages Classification using Correlation-based Feature Subset Selection

    Directory of Open Access Journals (Sweden)

    Ary Noviyanto

    2011-09-01

    Knowing our sleep quality can help us maximize day-to-day performance. The ECG signal has the potential to determine sleep stages so that sleep quality can be measured. The data used in this research are single-lead ECG signals from the MIT-BIH Polysomnographic Database. ECG features can be derived from the RR interval, EDR information, and the raw ECG signal. Correlation-based Feature Subset Selection (CFS) is used to choose the features that are significant for determining the sleep stages; a sketch of its merit function follows this record. The chosen features were evaluated using four classifiers of different characteristics (Bayesian network, multilayer perceptron, IB1 and random forest). Performance evaluations with the Bayesian network, IB1 and random forest show that CFS performs excellently: it can reduce the number of features significantly with only a small decrease in accuracy. The best classification result in this research was a combination of the feature set derived from the raw ECG signal and the random forest classifier.
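
    The CFS merit function itself is compact enough to write out: a subset scores highly when its features correlate with the sleep-stage label but not with one another. Numeric class labels and a candidate index subset are assumed; the best-first search that CFS normally wraps around this merit is not shown.

      # CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff).
      import numpy as np

      def cfs_merit(X, y, subset):
          Xs = X[:, list(subset)]
          k = Xs.shape[1]
          # mean absolute feature-class correlation
          r_cf = np.mean([abs(np.corrcoef(Xs[:, i], y)[0, 1]) for i in range(k)])
          if k == 1:
              return r_cf
          # mean absolute feature-feature correlation
          r_ff = np.mean([abs(np.corrcoef(Xs[:, i], Xs[:, j])[0, 1])
                          for i in range(k) for j in range(i + 1, k)])
          return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)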

  5. Fingerprint Feature Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Mehala. G

    2014-03-01

    The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also extract true minutiae.

  6. Relative Amplitude based Features of characteristic ECG-Peaks for Identification of Coronary Artery Disease

    Science.gov (United States)

    Gohel, Bakul; Tiwary, U. S.; Lahiri, T.

    Coronary artery disease, or myocardial infarction, is the leading cause of death and disability in the world. The ECG is widely used as a cheap diagnostic tool for coronary artery disease but has low sensitivity under the present criteria based on ST-segment, T-wave and Q-wave changes. To increase the sensitivity of the ECG, we have introduced new features based on the relative amplitude of the characteristic 'R' and 'S' ECG peaks between two leads. The relative-amplitude features show remarkable capability in discriminating myocardial infarction from healthy patterns: a backpropagation neural network classifier yields results with 81.82% sensitivity and 81.82% specificity. Relative amplitude might also be an efficient way to minimize the effect of body composition on ECG amplitude-based features without the use of any information other than the ECG.

  7. Fingerprint Feature Extraction Algorithm

    OpenAIRE

    Mehala. G

    2014-01-01

    The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also extract true minutiae.

  8. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    Directory of Open Access Journals (Sweden)

    Park Jinho

    2012-06-01

    Background: Myocardial ischemia can develop into more serious diseases. Earlier, more accurate and automatic detection of the ischemic syndrome in the electrocardiogram (ECG) can prevent it from developing into a catastrophic disease. To this end, we propose a new method which employs wavelets and simple feature selection. Methods: For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering and detect the time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heartbeat, we extract three features which can be used for differentiating ST episodes from normal: (1) the area between the QRS offset and T-peak points, (2) the normalized and signed sum from the QRS offset to the effective zero voltage point, and (3) the slope from the QRS onset to the offset point. We average the feature values over successive five-beat windows to reduce the effect of outliers. Finally we apply classifiers to those features. Results: We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively; the KDE classifier detects 349 ischemic ST episodes out of the total 367. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively; the SVM classifier detects 355 ischemic ST episodes. Conclusions: We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques for removing baseline wandering and detecting the time positions of QRS complexes by the discrete wavelet transform, and explicit feature extraction from the morphology of ECG waveforms. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical
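
    The three per-beat features transcribe almost directly into code, assuming the fiducial sample indices (QRS onset/offset, T peak, effective zero-voltage point) are already located; the max-abs normalization in the second feature is one reading of "normalized", and the five-beat averaging follows the abstract.

      # Sketch: the three ischemia features, plus five-beat averaging.
      import numpy as np

      def ischemia_features(beat, fs, qrs_on, qrs_off, t_peak, zero_v):
          # (1) area between QRS offset and T peak
          area = np.trapz(beat[qrs_off:t_peak + 1], dx=1.0 / fs)
          # (2) normalized, signed sum from QRS offset to effective zero voltage
          seg = beat[qrs_off:zero_v + 1]
          signed_sum = np.sum(seg) / (np.max(np.abs(seg)) or 1.0)
          # (3) slope from QRS onset to offset
          slope = (beat[qrs_off] - beat[qrs_on]) / ((qrs_off - qrs_on) / fs)
          return area, signed_sum, slope

      def smooth_over_beats(feature_rows):
          # average successive groups of five beats to damp outliers
          F = np.asarray(feature_rows, dtype=float)
          n = (len(F) // 5) * 5
          return F[:n].reshape(-1, 5, F.shape[1]).mean(axis=1)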

  9. Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.

    Science.gov (United States)

    Tripathy, R K; Dandapat, S

    2016-06-01

    Cardiac activities, such as the depolarization and relaxation of the atria and ventricles, are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies, and it is a cumbersome task for medical experts to visually identify any subtle changes in these features over 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, for classification of bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD) and healthy control (HC). The dual tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed, and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (the PTB database) is used for testing of the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier have better performance for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90% and 94.31% for the BBB, HMD and MI classes. The sensitivity value of the proposed method for the MI class is compared with state-of-the-art techniques for multilead ECG.

  10. Hemodynamic, ventilator, and ECG changes in pediatric patients undergoing extraction

    Directory of Open Access Journals (Sweden)

    Y K Sanadhya

    2013-01-01

    Background: Dental treatment induces pain, anxiety, and fear. This study was conducted to assess the hemodynamic, ventilatory, and electrocardiographic changes during the extraction procedure among 12-15-year-old children and to compare these changes with anxiety, fear, and pain. Materials and Methods: A purposive sample of 60 patients selected on inclusion and exclusion criteria underwent the study procedure in the dental OPD of a medical college and hospital. Anxiety, fear, and pain were recorded by the dental anxiety scale, dental fear scale, and visual analogue scale, respectively, before the start of the procedure. Systolic blood pressure, diastolic blood pressure, heart rate, oxygen saturation, and electrocardiogram changes were monitored during the extraction procedure. Recordings were taken four times (preinjection, injection, extraction, and postextraction) and analyzed. Results: At the preinjection phase the mean values were systolic blood pressure (128 ± 11.2), diastolic blood pressure (85.7 ± 6.3), heart rate (79.7 ± 9.3), and oxygen saturation (97.9 ± 5.8). These values increased in the injection phase and decreased in the extraction phase, with the lowest values found 10 min after the procedure; this relation was significant for all parameters except oxygen saturation (P = 0.48, NS). ECG abnormalities were seen in 22 patients and were significant before and after injection of local anesthetic (P = 0.0001, S). Conclusions: Anxiety, fear, and pain have an effect on hemodynamic, ventilatory, and cardiovascular parameters during the extraction procedure, and hence behavioral management has to be emphasized among children in dental clinics.

  11. Unobtrusive monitoring of ECG-derived features during daily smartphone use.

    Science.gov (United States)

    Kwon, Sungjun; Kang, Seungwoo; Lee, Youngki; Yoo, Chungkuk; Park, Kwangsuk

    2014-01-01

    Heart rate variability (HRV) is known to be one of the representative ECG-derived features useful for diverse pervasive healthcare applications. Advances in daily physiological monitoring technology are enabling the monitoring of HRV in people's everyday lives. In this study, we evaluate the feasibility of measuring ECG-derived features such as HRV using only the smartphone-integrated ECG sensor system named Sinabro. We conducted the evaluation with 13 subjects in five predetermined smartphone use cases. The results show the potential of the smartphone-based sensing system to support daily monitoring of ECG-derived features: the average errors of HRV over all participants ranged from 1.65% to 5.83% (SD: 2.54~10.87) across the five use cases, and all individual HRV parameters showed average errors of less than 5% for the three reliable cases.
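
    For reference, the usual time-domain HRV quantities behind such monitoring can be computed from R-peak times as below; this is generic HRV code, not Sinabro-specific.

      # Standard time-domain HRV parameters from R-peak times (in seconds).
      import numpy as np

      def hrv_time_domain(r_peak_times_s):
          rr = np.diff(np.asarray(r_peak_times_s))    # RR intervals in seconds
          sdnn = np.std(rr, ddof=1)                   # overall variability
          rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term variability
          nn50 = np.sum(np.abs(np.diff(rr)) > 0.05)   # successive pairs differing by >50 ms
          return {"mean_hr_bpm": 60.0 / rr.mean(),
                  "SDNN_s": sdnn, "RMSSD_s": rmssd,
                  "pNN50": nn50 / max(len(rr) - 1, 1)}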

  12. Driver Fatigue Features Extraction

    Directory of Open Access Journals (Sweden)

    Gengtian Niu

    2014-01-01

    Driver fatigue is the main cause of traffic accidents, and extracting effective fatigue features is important for recognition accuracy and traffic safety. To address this, this paper proposes a new method of driver fatigue feature extraction based on facial image sequences. In this method, each facial image in the sequence is first divided into nonoverlapping blocks of the same size, and Gabor wavelets are employed to extract multiscale and multiorientation features. Then the mean value and standard deviation of each block's features are calculated. Considering that the facial expression of fatigue is a dynamic process that develops over time, each block's features are analyzed across the sequence. Finally, the AdaBoost algorithm is applied to select the most discriminating fatigue features. The proposed method was tested on a self-built database which includes a wide range of human subjects of different genders, poses, and illuminations in real-life fatigue conditions. Experimental results show the effectiveness of the proposed method.

  13. ECG quality assessment based on a kernel support vector machine and genetic algorithm with a feature matrix

    Institute of Scientific and Technical Information of China (English)

    Ya-tao ZHANG; Cheng-yu LIU; Shou-shui WEI; Chang-zhi WEI; Fei-fei LIU

    2014-01-01

    We propose a systematic ECG quality classification method based on a kernel support vector machine (KSVM) and genetic algorithm (GA) to determine whether ECGs collected via mobile phone are acceptable or not. This method includes three main modules, i.e., lead-fall detection, feature extraction, and intelligent classification. First, lead-fall detection is executed to make the initial classification. Then the power spectrum, baseline drift, amplitude difference, and other time-domain features of the ECGs are analyzed and quantified to form the feature matrix. Finally, the feature matrix is assessed using the KSVM and GA to determine the ECG quality classification results. A Gaussian radial basis function (GRBF) is employed as the kernel function of the KSVM and its performance is compared with that of the Mexican hat wavelet function (MHWF). GA is used to determine the optimal parameters of the KSVM classifier and its performance is compared with that of the grid search (GS) method. The performance of the proposed method was tested on a database from the PhysioNet/Computing in Cardiology Challenge 2011, which includes 1500 12-lead ECG recordings. True positive (TP), false positive (FP), and classification accuracy were used as the assessment indices. For training database set A (1000 recordings), the optimal results were obtained using the combination of lead-fall, GA, and GRBF methods, and the corresponding results were: TP 92.89%, FP 5.68%, and classification accuracy 94.00%. For test database set B (500 recordings), the optimal results were also obtained using the combination of lead-fall, GA, and GRBF methods, and the classification accuracy was 91.80%.
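
    Two of the quoted time-domain quality features plus the Gaussian-kernel SVM stage sketch as follows with scikit-learn; the lead-fall check and the GA parameter search are omitted, and the window length and SVM parameters are illustrative.

      # Sketch: simple signal-quality features and an RBF-kernel SVM.
      import numpy as np
      from sklearn.svm import SVC

      def quality_features(ecg, fs):
          # baseline drift: variability of a heavily smoothed copy of the signal
          win = int(fs)                                  # 1 s moving average
          baseline = np.convolve(ecg, np.ones(win) / win, mode="same")
          drift = np.std(baseline)
          amp_range = np.ptp(ecg)                        # amplitude difference (max - min)
          return np.array([drift, amp_range])

      def fit_quality_classifier(X, labels):             # labels: acceptable / not
          return SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, labels)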

  14. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features, where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against illumination changes, scale variation, head rotations, and hand interference.

  15. Classification of ECG signals using LDA with factor analysis method as feature reduction technique.

    Science.gov (United States)

    Kaur, Manpreet; Arora, A S

    2012-11-01

    The analysis of the ECG signal, especially the QRS complex as the most characteristic wave in the ECG, is a widely accepted approach to study and classify cardiac dysfunctions. In this paper, wavelet coefficients calculated for the QRS complex are first taken as features. Next, factor analysis procedures without rotation and with orthogonal rotation (varimax, equimax and quartimax) are used for feature reduction. The procedure uses the principal component method to estimate component loadings. Further, classification is done with an LDA classifier. The MIT-BIH arrhythmia database is used, and five types of beats (normal, PVC, paced, LBBB and RBBB) are considered for analysis. Accuracy, sensitivity and positive predictivity are the performance parameters used for comparing the feature reduction techniques. Results demonstrate that the equimax rotation method yields the maximum average accuracy of 99.056% among the methods used.

  16. Comparative study of T-amplitude features for fitness monitoring using the ePatch® ECG recorder

    DEFF Research Database (Denmark)

    Thorpe, Julia Rosemary; Saida, Trine; Mehlsen, Jesper

    2014-01-01

    This study investigates ECG features, focusing on T-wave amplitude, from a wearable ECG device as a potential method for fitness monitoring in exercise rehabilitation. An automatic T-peak detection algorithm is presented that uses local baseline detection to overcome baseline drift without the need...

  17. Feature Extraction Using MFCC

    Directory of Open Access Journals (Sweden)

    Shikha Gupta

    2013-08-01

    The Mel Frequency Cepstral Coefficient (MFCC) is a very common and efficient technique for signal processing. This paper presents a new use of the MFCC: hand gesture recognition. The objective of using the MFCC for hand gesture recognition is to explore its utility for image processing; until now it has been used in speech recognition and speaker identification. The present system is based on converting the hand gesture into a one-dimensional (1-D) signal and then extracting the first 13 MFCCs from the converted 1-D signal. Classification is performed using a support vector machine. Experimental results show that the proposed application of the MFCC to gesture recognition has very good accuracy and hence can be used for recognition of sign language or for other household applications, in combination with other techniques such as Gabor filters or the DWT to increase the accuracy rate and make it more efficient.

  18. Feature extraction using fractal codes

    NARCIS (Netherlands)

    Schouten, Ben; Zeeuw, Paul M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  19. Feature Extraction Using Fractal Codes

    NARCIS (Netherlands)

    Schouten, B.A.M.; Zeeuw, P.M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  20. Extraction of fetal electrocardiogram (ECG) by extended state Kalman filtering and adaptive neuro-fuzzy inference system (ANFIS) based on single channel abdominal recording

    Indian Academy of Sciences (India)

    D Panigrahy; P K Sahu

    2015-06-01

    The fetal electrocardiogram (ECG) gives information about the health status of the fetus, so an early diagnosis of any cardiac defect before delivery increases the effectiveness of appropriate treatment. In this paper, the authors investigate the use of an adaptive neuro-fuzzy inference system (ANFIS) with an extended Kalman filter for fetal ECG extraction from one ECG signal recorded at the abdominal area of the mother's skin. The abdominal ECG is considered to be composite, as it contains both the mother's and the fetus's ECG signals. We use the extended Kalman filter framework to estimate the maternal component from the abdominal ECG. The maternal component in the abdominal ECG signal is a nonlinearly transformed version of the maternal ECG. An ANFIS network has been used to identify this nonlinear relationship and to align the estimated maternal ECG signal with the maternal component in the abdominal ECG signal. Thus, we extract the fetal ECG component by subtracting the aligned version of the estimated maternal ECG from the abdominal signal. Our results demonstrate the effectiveness of the proposed technique in extracting the fetal ECG component from the abdominal signal at different noise levels. The proposed technique is also validated on the extraction of fetal ECG from both actual abdominal recordings and synthetic abdominal recordings.

  1. Extraction of fetal heart rate from maternal surface ECG with provisions for multiple pregnancies.

    Science.gov (United States)

    Fanelli, A; Signorini, M G; Heldt, T

    2012-01-01

    Twin pregnancies carry an inherently higher risk than singleton pregnancies due to the increased chances of uterine growth restriction. It is thus desirable to monitor the wellbeing of the fetuses during gestation to detect potentially harmful conditions. The detection of fetal heart rate from the maternal abdominal ECG represents one possible approach for noninvasive and continuous fetal monitoring. Here, we propose a new algorithm for the extraction of twin fetal heart rate signals from maternal abdominal ECG recordings. The algorithm detects the fetal QRS complexes and converts the QRS onset series into a binary signal that is then recursively scanned to separate the contributions from the two fetuses. The algorithm was tested on synthetic singleton and twin abdominal recordings. It achieved an average sensitivity and accuracy for QRS complex detection of 97.5% and 93.6%, respectively.

  2. Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.

    Science.gov (United States)

    Ahmadieh, Hajar; Asl, Babak Mohammadzadeh

    2017-04-01

    We proposed a noninvasive method for separating the fetal ECG (FECG) from the maternal ECG (MECG) by using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from the abdominal signal by using one abdominal channel, including maternal and fetal cardiac signals and other environmental noise signals, and one chest channel. The proposed algorithm detects the nonlinear dynamics of the mother's body, so the components of the MECG are estimated from the abdominal signal. By subtracting the estimated maternal cardiac signal from the abdominal signal, the fetal cardiac signal can be extracted. This algorithm was applied to synthetic ECG signals generated based on the models developed by McSharry et al. and Behar et al., and also to the DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, in evaluation of the algorithm with the synthetic data based on the McSharry model, for input signals with an SNR of -5 dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. Also, the results show that increasing the uncertainty or decreasing the input SNR leads to an increasing percentage of improvement in the SNR of the extracted FECG. For instance, when the SNR of the input signal decreases to -30 dB, our proposed algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on the Behar model. Our results on the real database reflect the success of the proposed method in separating the maternal and fetal heart signals even if their waves overlap in time. Moreover, the proposed algorithm was applied to simulated fetal ECG with ectopic beats and achieved good results in separating the FECG from the MECG. The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and the polynomial networks methods, which is due to its

  3. Feature extraction for speaker diarization

    OpenAIRE

    Negre Rabassa, Enric

    2016-01-01

    Different low- and high-level features for automatic speaker diarization are explored and compared, using different databases.

  4. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    Science.gov (United States)

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper we propose an efficient method for denoising and extracting fiducial points (FPs) of ECG signals. The method is based on a nonlinear dynamic model which uses Gaussian functions to model the ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation, and we compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising, and nonlinear EKF25 for fiducial point extraction and ECG interval analysis, are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than the errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods.
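
    The Gaussian observation model at the heart of such EKF frameworks is short enough to show on its own: each ECG wave is a Gaussian bump in cardiac phase. The sketch below only evaluates that model on a phase grid; the Kalman recursion is omitted and the parameter values are illustrative, not fitted.

      # Sketch: Gaussian-sum ECG observation model over cardiac phase.
      import numpy as np

      def gaussian_ecg(theta, params):
          """theta: phase in radians (-pi..pi); params: list of (a_i, b_i, theta_i)."""
          z = np.zeros_like(theta)
          for a, b, mu in params:
              d = np.remainder(theta - mu + np.pi, 2 * np.pi) - np.pi  # wrapped phase error
              z += a * np.exp(-d ** 2 / (2 * b ** 2))
          return z

      # Rough P-QRS-T shape: (amplitude, width, centre phase) per wave.
      PARAMS = [(0.1, 0.25, -np.pi / 3),   # P
                (1.0, 0.10, 0.0),          # R
                (0.3, 0.40, np.pi / 2)]    # T
      wave = gaussian_ecg(np.linspace(-np.pi, np.pi, 512), PARAMS)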

  5. A model-based approach to human identification using ECG

    Science.gov (United States)

    Homer, Mark; Irvine, John M.; Wendelken, Suzanne

    2009-05-01

    Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Current techniques for identification using ECG rely on empirical methods for extracting features from the ECG signal. This paper presents an alternative approach based on a time-domain model of the ECG trace. Because Auto-Regressive Integrated Moving Average (ARIMA) models form a rich class of descriptors for representing the structure of periodic time series data, they are well-suited to characterizing the ECG signal. We present a method for modeling the ECG, extracting features from the model representation, and identifying individuals using these features.

  6. A multichannel nonlinear adaptive noise canceller based on generalized FLANN for fetal ECG extraction

    Science.gov (United States)

    Ma, Yaping; Xiao, Yegui; Wei, Guo; Sun, Jinwei

    2016-01-01

    In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. A FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB) including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TSpca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESNa) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESNa and GFLANN perform best, with the ESNa being slightly better than the GFLANN but about four times more
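
    A single-reference FLANN canceller in its plainest form is sketched below: the chest MECG sample is expanded with trigonometric functional links, LMS adapts the weights to predict the maternal component of the abdominal lead, and the residual is taken as the fetal ECG. The generalized (GFLANN) cross terms and the parallel FIR branch of the paper are left out, and the chest signal is assumed scaled to roughly [-1, 1].

      # Sketch: trigonometric FLANN adaptive noise canceller with LMS.
      import numpy as np

      def flann_expand(x, order=2):
          feats = [x]
          for p in range(1, order + 1):
              feats += [np.sin(np.pi * p * x), np.cos(np.pi * p * x)]
          return np.array(feats)

      def flann_anc(abdominal, chest, order=2, mu=0.01):
          w = np.zeros(2 * order + 1)
          fetal = np.zeros_like(np.asarray(abdominal, dtype=float))
          for n in range(len(abdominal)):
              phi = flann_expand(chest[n], order)
              e = abdominal[n] - w @ phi       # residual = estimated fetal component
              w += mu * e * phi                # LMS weight update
              fetal[n] = e
          return fetal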

  7. III Lead ECG Pulse Measurement Sensor

    Science.gov (United States)

    Thangaraju, S. K.; Munisamy, K.

    2015-09-01

    Heart rate sensing is very important. A method of measuring the heart pulse using an electrocardiogram (ECG) technique is described. The electrocardiogram is a measurement of the potential difference (the electrical pulse) generated by cardiac tissue, mainly the heart. This paper also reports the development of a three-lead ECG hardware system that would be the basis for developing a more cost-efficient, portable and easy-to-use ECG machine. Einthoven's three-lead method [1] is used for ECG signal extraction. The system is developed using amplifiers, such as the instrumentation amplifier AD620BN and the conventional operational amplifier uA741, to amplify the extracted ECG signal. The signal is then filtered using Butterworth filter techniques to obtain optimum output. A right-leg guard was also implemented as a safety feature of this system. Simulation for the development of the system was carried out using the PSpice program.
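
    A software counterpart of the Butterworth stage might look like the sketch below (SciPy assumed); the band edges, filter order and sampling rate are illustrative rather than the paper's hardware values.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 500.0                                    # assumed sampling rate, Hz
        # 4th-order band-pass keeping the usual diagnostic ECG band
        b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)

        def denoise_ecg(x):
            # zero-phase filtering avoids distorting wave timings
            return filtfilt(b, a, np.asarray(x, dtype=float))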

  8. A method for extracting fetal ECG based on EMD-NMF single channel blind source separation algorithm.

    Science.gov (United States)

    He, Pengju; Chen, Xiaomeng

    2015-01-01

    Nowadays, detecting the fetal ECG (FECG) from abdominal signals is a commonly used method, but the fetal ECG is affected by the maternal ECG (MECG). Current FECG extraction algorithms mainly target multichannel signals; they often assume there is only one fetus and do not consider multiple births. This paper proposes a single-channel blind source separation algorithm that processes a single acquired abdominal signal. The algorithm decomposes the abdominal signal into multiple intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). The correlation matrix of the IMFs is calculated and the number of independent ECG signals is estimated using an eigenvalue method. A nonnegative matrix is constructed from the determined number and the decomposed IMFs, and separation of the MECG and FECG is achieved using nonnegative matrix factorization (NMF). Experiments used four channels of synthetic signals and two channels of ECG to verify the correctness and feasibility of the proposed algorithm. Results show that the proposed algorithm can determine the number of independent signals in a single acquired signal, that the FECG can be extracted from a single-channel observed signal, and that the algorithm can be used to separate the MECG and FECG.
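
    A rough sketch of this pipeline, assuming the PyEMD (EMD-signal) and scikit-learn packages; the eigenvalue-based estimation of the source count is replaced here by a fixed number for brevity.

        import numpy as np
        from PyEMD import EMD                 # assumes the PyEMD (EMD-signal) package
        from sklearn.decomposition import NMF

        def emd_nmf_separate(abdominal, n_sources=2):
            imfs = EMD().emd(np.asarray(abdominal, dtype=float))  # (n_imfs, n_samples)
            V = imfs - imfs.min()             # shift so the matrix is nonnegative
            model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
            W = model.fit_transform(V)        # mixing-like weights per IMF
            H = model.components_             # candidate source waveforms
            return H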

  9. Matching a wavelet to ECG signal.

    Science.gov (United States)

    Takla, George F; Nair, Bala G; Loparo, Kenneth A

    2006-01-01

    In this paper we develop an approach to synthesize a wavelet that matches the ECG signal. Matching a wavelet to a signal of interest has potential advantages in extracting signal features with greater accuracy, particularly when the signal is contaminated with noise. The approach that we have taken is based on the theoretical work of Chapa and Rao. We have applied their technique to a noise-free ECG signal representing one cardiac cycle. Results indicate that a matched wavelet that captures the broad ECG features could be obtained. Such a wavelet could be used to extract ECG features such as QRS complexes and P and T waves with greater accuracy.
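
    The matched wavelet itself follows Chapa and Rao's synthesis; as a stand-in, the sketch below applies a generic Mexican-hat continuous wavelet transform (PyWavelets assumed) to emphasize QRS-like transients, with illustrative scales and sampling rate.

        import numpy as np
        import pywt

        fs = 360.0                              # assumed sampling rate, Hz

        def qrs_emphasis(ecg):
            scales = np.arange(1, 32)
            coef, _freqs = pywt.cwt(np.asarray(ecg, dtype=float), scales, "mexh",
                                    sampling_period=1.0 / fs)
            # per-sample maximum magnitude across scales is large at sharp QRS energy
            return np.abs(coef).max(axis=0)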

  10. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Bohui Zhu

    2013-01-01

    Full Text Available This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance indicators, sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in terms of global search and convergence ability, which demonstrates its effectiveness for the detection of ECG arrhythmias.

  11. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
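
    A hedged sketch of the described idea using scikit-image, with a Sobel gradient standing in for the Canny gradient stage and an illustrative marker threshold:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import gaussian, sobel
        from skimage.segmentation import watershed

        def segment_small_features(image):
            grad = sobel(gaussian(image, sigma=1.0))         # smoothed gradient magnitude
            markers, _ = ndi.label(grad < 0.1 * grad.max())  # seeds in flat regions
            labels = watershed(grad, markers)                # closed contours become regions
            return labels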

  12. Rapid Feature Extraction for Optical Character Recognition

    CERN Document Server

    Hossain, M Zahid; Yan, Hong

    2012-01-01

    Feature extraction is one of the fundamental problems of character recognition. The performance of a character recognition system depends on proper feature extraction and correct classifier selection. In this article, a rapid feature extraction method named Celled Projection (CP) is proposed, which computes the projections of each section formed by partitioning an image into cells. The recognition performance of the proposed method is compared with other widely used feature extraction methods that have been intensively studied for many different scripts in the literature. The experiments were conducted using Bangla handwritten numerals along with three different well-known classifiers, and demonstrate comparable results, including 94.12% recognition accuracy using celled projection.
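
    A minimal reading of the Celled Projection idea, with an assumed 4x4 grid: partition the glyph image into cells and concatenate each cell's row and column projections.

        import numpy as np

        def celled_projection(img, grid=(4, 4)):
            img = np.asarray(img, dtype=float)
            feats = []
            for rows in np.array_split(img, grid[0], axis=0):
                for cell in np.array_split(rows, grid[1], axis=1):
                    feats.extend(cell.sum(axis=1))   # row projection of the cell
                    feats.extend(cell.sum(axis=0))   # column projection of the cell
            return np.array(feats)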

  13. ANTHOCYANINS ALIPHATIC ALCOHOLS EXTRACTION FEATURES

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Full Text Available Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as the dye food additive E163. Extraction from natural vegetable raw materials traditionally uses ethanol or acidified water, but in some technologies these are unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, extraction of the pigments with alcohols having different carbon-skeleton structures and different positions and numbers of hydroxyl groups was explored. For the isolation of anthocyanins, raw materials were extracted sequentially twice at t = 60 °C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid chromaticity measurements. The color of blackcurrant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group; alcohols of normal structure give higher optical density and a higher red color component than alcohols of isomeric structure. This is due to differing abilities to form hydrogen bonds when extracting anthocyanins, and to other intermolecular interactions. During storage, blackcurrant extracts undergo significant structural changes of the extracted pigments, which leads to a significant change in color. This variation is stronger the longer the carbon skeleton and the more branched the extractant molecule. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols. However, these extracts are preserved significantly better because of their reducing ability when interacting with polyphenolic compounds.

  14. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting the features of a fingerprint is very important. The local curvature of the ridges of a fingerprint is irregular, which makes it difficult to effectively extract curve features that describe the fingerprint. This article proposes a novel algorithm that combines information from a few nearby fingerprint ridges to extract a new characteristic which can describe the curvature features of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics extracted by it can clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  16. Extraction and assessment of chatter feature

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Presents feature wavelet packets (FWP), a new method of chatter feature extraction in the milling process based on the wavelet packet transform (WPT) and vibration signals. Studies the procedure of automatic feature selection for a given process. Establishes an exponential autoregressive (EAR) model to extract the limit cycle behavior of chatter, since chatter is a nonlinear oscillation with a limit cycle. Gives a way to determine the number of FWPs, and experimental data to assess the effectiveness of the WPT feature extraction via the unforced response of the EAR model of the reconstructed signal.

  17. CinC Challenge 2013: comparing three algorithms to extract fetal ECG

    Science.gov (United States)

    Loja, Juan; Velecela, Esteban; Palacio-Baus, Kenneth; Astudillo, Darwin; Medina, Rubén; Wong, Sara

    2015-12-01

    This paper reports a comparison between three fetal ECG (fECG) detectors developed during the CinC 2013 challenge for fECG detection. Algorithm A1 is based on Independent Component Analysis, A2 on fECG detection using the RS slope, and A3 on Expectation-Weighted Estimation of Fiducial Points. The proposed methodology was validated using the annotated database available for the challenge. Each detector was characterized in terms of its performance using measures of sensitivity (Se), positive predictive value (P+) and delay time (td). Additionally, the database was contaminated with white noise for two SNR conditions. Decision fusion was tested considering the most common types of combination of detectors. Results show that the decision fusion of A1 and A2 improves fQRS detection, maintaining high Se and P+ even under low SNR conditions without a significant td increase.

  18. Tongue Image Feature Extraction in TCM

    Institute of Scientific and Technical Information of China (English)

    LI Dong; DU Lian-xiang; LU Fu-ping; DU Jun-ping

    2004-01-01

    In this paper, digital image processing and computer vision techniques are applied to study tongue images for feature extraction with VC++ and Matlab. Extraction and analysis of the tongue surface features are based on shape, color, edge, and texture. The developed software has various functions, a good user interface, and is easy to use. Feature data for tongue image pattern recognition is provided, which forms a sound basis for future tongue image recognition.

  19. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-08-09

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets.

  20. A Novel Technique for Fetal ECG Extraction Using Single-Channel Abdominal Recording

    Directory of Open Access Journals (Sweden)

    Nannan Zhang

    2017-02-01

    Full Text Available Non-invasive fetal electrocardiograms (FECGs) are an alternative method to standard means of fetal monitoring which permit long-term continual monitoring. However, in abdominal recording, the FECG amplitude is weak in the temporal domain and overlaps with the maternal electrocardiogram (MECG) in the spectral domain. Research in the area of non-invasive separations of FECG from abdominal electrocardiograms (AECGs) is in its infancy and several studies are currently focusing on this area. An adaptive noise canceller (ANC) is commonly used for cancelling interference in cases where the reference signal only correlates with an interference signal, and not with a signal of interest. However, results from some existing studies suggest that propagation of electrocardiogram (ECG) signals from the maternal heart to the abdomen is nonlinear, hence the adaptive filter approach may fail if the thoracic and abdominal MECG lack strict waveform similarity. In this study, singular value decomposition (SVD) and smooth window (SW) techniques are combined to build a reference signal in an ANC. This is to avoid the limitation that thoracic MECGs recorded separately must be similar to abdominal MECGs in waveform. Validation of the proposed method with r01 and r07 signals from a public dataset, and a self-recorded private dataset showed that the proposed method achieved F1 scores of 99.61%, 99.28% and 98.58%, respectively for the detection of fetal QRS. Compared with four other single-channel methods, the proposed method also achieved higher accuracy values of 99.22%, 98.57% and 97.21%, respectively. The findings from this study suggest that the proposed method could potentially aid accurate extraction of FECG from MECG recordings in both clinical and commercial applications.

  1. A Novel Technique for Fetal ECG Extraction Using Single-Channel Abdominal Recording

    Science.gov (United States)

    Zhang, Nannan; Zhang, Jinyong; Li, Hui; Mumini, Omisore Olatunji; Samuel, Oluwarotimi Williams; Ivanov, Kamen; Wang, Lei

    2017-01-01

    Non-invasive fetal electrocardiograms (FECGs) are an alternative method to standard means of fetal monitoring which permit long-term continual monitoring. However, in abdominal recording, the FECG amplitude is weak in the temporal domain and overlaps with the maternal electrocardiogram (MECG) in the spectral domain. Research in the area of non-invasive separations of FECG from abdominal electrocardiograms (AECGs) is in its infancy and several studies are currently focusing on this area. An adaptive noise canceller (ANC) is commonly used for cancelling interference in cases where the reference signal only correlates with an interference signal, and not with a signal of interest. However, results from some existing studies suggest that propagation of electrocardiogram (ECG) signals from the maternal heart to the abdomen is nonlinear, hence the adaptive filter approach may fail if the thoracic and abdominal MECG lack strict waveform similarity. In this study, singular value decomposition (SVD) and smooth window (SW) techniques are combined to build a reference signal in an ANC. This is to avoid the limitation that thoracic MECGs recorded separately must be similar to abdominal MECGs in waveform. Validation of the proposed method with r01 and r07 signals from a public dataset, and a self-recorded private dataset showed that the proposed method achieved F1 scores of 99.61%, 99.28% and 98.58%, respectively for the detection of fetal QRS. Compared with four other single-channel methods, the proposed method also achieved higher accuracy values of 99.22%, 98.57% and 97.21%, respectively. The findings from this study suggest that the proposed method could potentially aid accurate extraction of FECG from MECG recordings in both clinical and commercial applications. PMID:28245585
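
    The SVD step of such a scheme can be sketched as below (pure NumPy); beat alignment on the maternal R-peaks and the smooth-window processing are assumed to have been done already, and the rank-1 choice is illustrative.

        import numpy as np

        def svd_reference(beats):
            """beats: (n_beats, beat_len) matrix of aligned abdominal segments."""
            U, s, Vt = np.linalg.svd(np.asarray(beats, dtype=float),
                                     full_matrices=False)
            # dominant singular component captures the repeating maternal beat
            rank1 = s[0] * np.outer(U[:, 0], Vt[0])
            return rank1          # per-beat MECG reference for the ANC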

  2. High-frequency ECG

    Science.gov (United States)

    Tragardh, Elin; Schlegel, Todd T.

    2006-01-01

    The standard ECG is by convention limited to 0.05-150 Hz, but higher frequencies are also present in the ECG signal. With high-resolution technology, it is possible to record and analyze these higher frequencies. The highest amplitudes of the high-frequency components are found within the QRS complex. In past years, the terms "high frequency", "high fidelity", and "wideband electrocardiography" have been used by several investigators to refer to the process of recording ECGs with an extended bandwidth of up to 1000 Hz. Several investigators have tried to analyze HF-QRS with the hope that additional features seen in the QRS complex would provide information enhancing the diagnostic value of the ECG. The development of computerized ECG-recording devices that made it possible to record ECG signals with high resolution in both time and amplitude, as well as better possibilities to store and process the signals digitally, offered new methods for analysis. Different techniques to extract the HF-QRS have been described. Several bandwidths and filter types have been applied for the extraction as well as different signal-averaging techniques for noise reduction. There is no standard method for acquiring and quantifying HF-QRS. The physiological mechanisms underlying HF-QRS are still not fully understood. One theory is that HF-QRS are related to the conduction velocity and the fragmentation of the depolarization wave in the myocardium. In a three-dimensional model of the ventricles with a fractal conduction system it was shown that high numbers of splitting branches are associated with HF-QRS. In this experiment, it was also shown that the changes seen in HF-QRS in patients with myocardial ischemia might be due to the slowing of the conduction velocity in the region of ischemia. This mechanism has been tested by Watanabe et al by infusing sodium channel blockers into the left anterior descending artery in dogs. In their study, 60 unipolar ECGs were recorded from the entire

  3. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  4. COLOR FEATURE EXTRACTION FOR CBIR

    Directory of Open Access Journals (Sweden)

    Dr. H.B.KEKRE

    2011-12-01

    Full Text Available Content Based Image Retrieval (CBIR) is the application of computer vision techniques to the image retrieval problem of searching for digital images in large databases. The CBIR method discussed in this paper can filter images based on their content, providing better indexing and more accurate results. In this paper we discuss: feature vector generation using a color averaging technique, similarity measures, and performance evaluation using 5 randomly selected query images per class, of which the result of one class is discussed. The Precision-Recall crossover plot is used as the performance evaluation measure to check the algorithm. As the system developed is generic, the database consists of images from different classes. The effect of the size of the database and the number of different classes on the relevancy of the retrievals is examined.
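
    A minimal sketch of a color-averaging feature vector of this kind; the block grid size is an assumption.

        import numpy as np

        def color_average_features(rgb, grid=(4, 4)):
            rgb = np.asarray(rgb, dtype=float)           # (H, W, 3) image
            feats = []
            for rows in np.array_split(rgb, grid[0], axis=0):
                for cell in np.array_split(rows, grid[1], axis=1):
                    feats.extend(cell.reshape(-1, 3).mean(axis=0))  # mean R, G, B
            return np.array(feats)

        # Retrieval would then rank database images by, e.g., Euclidean distance
        # between the query's feature vector and each stored vector.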

  5. Employing ensemble empirical mode decomposition for artifact removal: extracting accurate respiration rates from ECG data during ambulatory activity.

    Science.gov (United States)

    Sweeney, Kevin T; Kearney, Damien; Ward, Tomás E; Coyle, Shirley; Diamond, Dermot

    2013-01-01

    Observation of a patient's respiration signal can provide a clinician with the information necessary to analyse a subject's wellbeing. Due to an increase in population and the aging demographic, there is increasing stress on current healthcare systems, and therefore a requirement for more of the rudimentary patient testing to be performed outside of the hospital environment. However, due to the ambulatory nature of these recordings, there is also a desire to reduce the number of sensors required, in order to be unobtrusive to the wearer, and to use textile-based systems for comfort. The extraction of a proxy for the respiration signal from a recorded electrocardiogram (ECG) signal has therefore received considerable interest from previous researchers. To allow for accurate measurements, currently employed methods rely on the availability of a clean, artifact-free ECG signal from which to extract the desired respiration signal. However, ambulatory recordings, made outside of the hospital-centric environment, are often corrupted with contaminating artifacts, the most degrading of which are due to subject motion. This paper presents the use of the ensemble empirical mode decomposition (EEMD) algorithm to aid in the extraction of the desired respiration signal. Two separate techniques are examined: 1) extraction of the respiration signal directly from the noisy ECG; 2) removal of the artifact components relating to subject movement, allowing the use of currently available respiration signal detection techniques. The results presented illustrate that the two proposed techniques provide significant improvements in the accuracy of the breaths per minute (BPM) metric when compared to the available true respiration signal. The error was reduced from ±5.9 BPM before processing to ±2.9 and ±3.3 BPM after processing with the two EEMD-based techniques.
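
    A rough sketch of the first technique, assuming the PyEMD (EMD-signal) package; the zero-crossing frequency rule for picking respiratory IMFs is a crude illustration, not the paper's selection procedure.

        import numpy as np
        from PyEMD import EEMD            # assumes the PyEMD (EMD-signal) package

        def respiration_proxy(ecg, fs, resp_band=(0.1, 0.5)):
            imfs = EEMD().eemd(np.asarray(ecg, dtype=float))
            keep = []
            for imf in imfs:
                # rough dominant frequency from the zero-crossing count
                zc = np.count_nonzero(np.diff(np.signbit(imf).astype(np.int8)))
                f_dom = zc * fs / (2.0 * len(imf))
                if resp_band[0] <= f_dom <= resp_band[1]:
                    keep.append(imf)      # IMF oscillates at respiratory rates
            return np.sum(keep, axis=0) if keep else np.zeros_like(ecg)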

  6. ECG acquisition and automated remote processing

    CERN Document Server

    Gupta, Rajarshi; Bera, Jitendranath

    2014-01-01

    The book is focused on the area of remote processing of ECG in the context of telecardiology, an emerging area in the field of Biomedical Engineering Application. Considering the poor infrastructure and inadequate numbers of physicians in rural healthcare clinics in India and other developing nations, telemedicine services assume special importance. Telecardiology, a specialized area of telemedicine, is taken up in this book considering the importance of cardiac diseases, which are prevalent in the population under discussion. The main focus of this book is to discuss different aspects of ECG acquisition, its remote transmission and computerized ECG signal analysis for feature extraction. It also discusses ECG compression and the application of standalone embedded systems to develop a cost-effective solution for a telecardiology system.

  7. Linguistic feature analysis for protein interaction extraction

    Directory of Open Access Journals (Sweden)

    Cornelis Chris

    2009-11-01

    Full Text Available Abstract Background The rapid growth of the amount of publicly available reports on biomedical experimental results has recently caused a boost of text mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic, i.e., lexical and syntactic, data extracted from text. However, only few attempts have been made to evaluate the contribution of the different feature types. In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features, i.e., grammatical relations, shallow syntactic features (part-of-speech information) and lexical features. For this purpose, we use a recently proposed approach that uses support vector machines with structured kernels. Results Our results reveal that the contribution of the different feature types varies for the different data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, classifiers based on deep syntactic information prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared. Conclusion Our findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small relative to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features that are typically used in recent approaches.

  8. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    Science.gov (United States)

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be

  9. Custom FPGA processing for real-time fetal ECG extraction and identification.

    Science.gov (United States)

    Torti, E; Koliopoulos, D; Matraxia, M; Danese, G; Leporati, F

    2017-01-01

    Monitoring fetal cardiac activity during pregnancy is of crucial importance for evaluating fetal health. However, there is a lack of automatic and reliable methods for fetal ECG (FECG) monitoring that can perform this elaboration in real time. In this paper, we present a hardware architecture, implemented on the Altera Stratix V FPGA, capable of separating the FECG from the maternal ECG and correctly identifying it. We evaluated our system using both synthetic and real traces acquired from patients beyond the 20th pregnancy week. This work is part of a project aiming at developing a portable system for FECG continuous real-time monitoring. Its reduced power consumption, real-time processing capability and reduced size make it suitable to be embedded in the overall system which, to the best of our knowledge, is the first proposed to exploit Blind Source Separation with this technology.

  10. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data.

    Science.gov (United States)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-08-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described.
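
    A compact Pan-Tompkins-style detector of the kind applied to the residuals can be sketched as follows (SciPy assumed; band edges, window length and threshold are illustrative):

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def pan_tompkins_qrs(x, fs):
            b, a = butter(2, [5.0, 15.0], btype="bandpass", fs=fs)
            y = filtfilt(b, a, np.asarray(x, dtype=float))   # band-pass
            y = np.diff(y) ** 2                              # derivative + squaring
            win = max(1, int(0.150 * fs))                    # 150 ms integration window
            y = np.convolve(y, np.ones(win) / win, mode="same")
            thresh = 0.3 * y.max()                           # crude fixed threshold
            peaks, _ = find_peaks(y, height=thresh, distance=int(0.2 * fs))
            return peaks                                     # sample indices of QRS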

  11. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits color properties of human skin to localize image regions that are face candidates. The facial feature extraction is performed only on preselected face-candidate regions. Likewise, for eye and mouth localization, color information and local contrast around the eyes are used. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.

  12. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  13. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of extracting features in the radar target classification process using the J frequency band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier transform (DFT) and Multiple Signal Characterisation (MUSIC), based on the detection of the Doppler effect. The analysis favored the DFT with a Hanning windowing function. We aimed to classify vehicle targets into two classes, wheeled vehicles and tracked vehicles. The results show that it is possible to classify them only while they are moving; the class feature results from the movement of the moving parts of the vehicle. However, we have not found any feature to classify wheeled and tracked vehicles while stationary, even with their engines running.

  14. Medical Image Feature, Extraction, Selection And Classification

    Directory of Open Access Journals (Sweden)

    M.VASANTHA,

    2010-06-01

    Full Text Available Breast cancer is the most common type of cancer found in women. It is the most frequent form of cancer, and one in 22 women in India is likely to suffer from breast cancer. This paper proposes an image classifier to classify mammogram images into normal, benign and malignant images. In total, 26 features, including histogram intensity features and GLCM features, are extracted from each mammogram image. A hybrid approach to feature selection is proposed in this paper which reduces the features by 75%. Decision tree algorithms are applied to mammography classification using these reduced features. Experimental results have been obtained for a data set of 113 images of different types taken from MIAS. This technique of classification has not been attempted before and it reveals the potential of data mining in medical treatment.

  15. Extraction of essential features by quantum density

    Science.gov (United States)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction as an essential and important search over a dataset. The problem concerns the real ownership of signals and images. The features sought are often difficult to identify because of data complexity and redundancy. We show a method of finding essential feature groups, according to the defined issues. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th feature from the original data, which indicates the important set of attributes. Finally, small sets of attributes have been generated for subsets with different feature properties. They can be used for the construction of a small set of essential features. All figures were made in Matlab6.

  16. ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm

    Science.gov (United States)

    Kora, Padmavathi; Sri Rama Krishna, K.

    2016-12-01

    Atrial fibrillation (AF) is a type of heart abnormality; during AF, electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. This paper consists of three major steps for the detection of heart diseases: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality, and most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as features of the ECG signal, and these features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.

  17. Classification of ECG Using Chaotic Models

    Directory of Open Access Journals (Sweden)

    Khandakar Mohammad Ishtiak

    2012-09-01

    Full Text Available Chaotic analysis has been shown to be useful in a variety of medical applications, particularly in cardiology. Chaotic parameters have shown potential in the identification of diseases, especially in the analysis of biomedical signals like the electrocardiogram (ECG). In this work, underlying chaos in ECG signals has been analyzed using various nonlinear techniques. First, the ECG signal is processed through a series of steps to extract the QRS complex. From this extracted feature, the beat-to-beat interval (BBI) and instantaneous heart rate (IHR) have been calculated. Then nonlinear parameters like the standard deviation and coefficient of variation, and nonlinear techniques like the central tendency measure (CTM) and phase space portrait, have been determined from both the BBI and IHR. The standard MIT-BIH database is used as reference data, where each ECG record contains 650,000 samples. The CTM is calculated for both BBI and IHR for each ECG record of the database. A much higher value of CTM for IHR is observed for eleven patients with normal beats, with a mean of 0.7737 and SD of 0.0946. On the contrary, the CTM for IHR of eleven patients with abnormal rhythm shows low values, with a mean of 0.0833 and SD of 0.0748. The CTM for BBI of the same eleven normal rhythm records also shows high values, with a mean of 0.6172 and SD of 0.1472, while the CTM for BBI of eleven abnormal rhythm records shows low values, with a mean of 0.0478 and SD of 0.0308. The phase space portrait also demonstrates a visible attractor with little dispersion for a healthy person's ECG and a widely dispersed plot in the 2-D plane for an ailing person's ECG. These results indicate that ECG can be classified based on this chaotic modeling, which works on the nonlinear dynamics of the system.
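
    The CTM over the second-order difference plot is straightforward to compute; in the sketch below the radius r is a user choice and 0.1 is purely illustrative.

        import numpy as np

        def ctm(series, r=0.1):
            """Fraction of points (x[n+1]-x[n], x[n+2]-x[n+1]) inside radius r."""
            x = np.asarray(series, dtype=float)
            d1 = x[1:-1] - x[:-2]          # first differences
            d2 = x[2:]   - x[1:-1]         # shifted first differences
            return float(np.mean(np.hypot(d1, d2) < r))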

  18. Extracting Product Features from Chinese Product Reviews

    Directory of Open Access Journals (Sweden)

    Yahui Xi

    2013-12-01

    Full Text Available With the great development of e-commerce, the number of product reviews grows rapidly on e-commerce websites. Review mining, which aims to discover valuable information from massive product reviews, has recently received a lot of attention. Product feature extraction is one of the basic tasks of product review mining; its effectiveness can significantly influence the performance of subsequent jobs. Double Propagation is a state-of-the-art technique in product feature extraction. In this paper, we apply Double Propagation to product feature extraction from Chinese product reviews and adopt some techniques to improve precision and recall. First, indirect relations and verb product features are introduced to increase recall. Second, when ranking candidate product features using HITS, we expand the number of hubs by means of dependency relation patterns between product features and opinion words to improve precision. Finally, Normalized Pattern Relevance is employed to filter the extracted product features. Experiments on diverse real-life datasets show promising results.

  19. The Combined Effect of Filters in ECG Signals for Pre-Processing

    OpenAIRE

    Isha V. Upganlawar; Harshal Chowhan

    2014-01-01

    The ECG signal is abruptly changing and continuous in nature. Diagnosing heart diseases such as paroxysmal arrhythmias is tied to intelligent health-care decisions, so the ECG signal needs to be pre-processed accurately before further operations on it, such as extracting features, wavelet decomposition, locating QRS complexes in ECG recordings and deriving related information such as heart rate and RR intervals, and classifying the signal with various classifiers. Filters p...

  20. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on the correlation between adjacent pixels has been proposed and implemented. A Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording the false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from the experiments based on different sets of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when the number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves the EER for the proposed iris recognition system.

  1. Feature extraction for structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois [Los Alamos National Laboratory; Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Nishio, Mayuko [UNIV OF TOKYO; Worden, Keith [UNIV OF SHEFFIELD; Takeda, Nobuo [UNIV OF TOKYO

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered are sensitivity, dimensionality, type of response, and presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.

  2. Fixed kernel regression for voltammogram feature extraction

    Science.gov (United States)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.

  3. Automatic Melody Generation System with Extraction Feature

    Science.gov (United States)

    Ida, Kenichi; Kozuki, Shinichi

    In this paper, we propose a melody generation system based on the analysis of existing melodies. In addition, we introduce a device that takes the user's preferences into account. Melody generation is done by optimally arranging pitches on a given rhythm. The optimality standard is decided using feature elements extracted from existing music by the proposed method. Moreover, the user's preferences are reflected in the standard by letting the user manipulate some of the feature elements. A GA then optimizes the pitch array based on the standard, realizing the system.

  4. Online Feature Extraction Algorithms for Data Streams

    Science.gov (United States)

    Ozawa, Seiichi

    Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification purposes. Those face images are considered a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can autonomously adapt to changes in the data distribution, is required. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently; due to space limitations, we focus here on incremental principal component analysis.
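
    For example, with scikit-learn's IncrementalPCA the eigenspace can be updated batch by batch (component and batch sizes are illustrative; each batch must contain at least n_components samples):

        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        ipca = IncrementalPCA(n_components=16)

        def consume_batch(batch):                  # batch: (n_samples, n_features)
            ipca.partial_fit(batch)                # update the subspace incrementally

        def extract_features(samples):
            return ipca.transform(samples)         # project onto current components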

  5. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector, which was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
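
    A sketch of the two feature types named above, assuming statsmodels for a Yule-Walker AR fit (the paper's exact AR estimator is not specified here); the order and bin count are illustrative.

        import numpy as np
        from statsmodels.regression.linear_model import yule_walker

        def semg_features(channel, ar_order=4, bins=9):
            x = np.asarray(channel, dtype=float)
            rho, _sigma = yule_walker(x, order=ar_order)     # AR coefficients
            hist, _ = np.histogram(x, bins=bins,
                                   range=(-3 * x.std(), 3 * x.std()))
            return np.concatenate([rho, hist / len(x)])      # one vector per channel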

  6. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    Science.gov (United States)

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed whether an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject-specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As an alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased the 12-lead detection rate by 7% for a reasonable threshold.

  7. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation

    Directory of Open Access Journals (Sweden)

    Axel Loewe

    2015-01-01

    Full Text Available In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed whether an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject-specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As an alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased the 12-lead detection rate by 7% for a reasonable threshold.

  8. A novel real-time patient-specific seizure diagnosis algorithm based on analysis of EEG and ECG signals using spectral and spatial features and improved particle swarm optimization classifier.

    Science.gov (United States)

    Nasehi, Saadat; Pourghassem, Hossein

    2012-08-01

    This paper proposes a novel real-time patient-specific seizure diagnosis algorithm based on analysis of electroencephalogram (EEG) and electrocardiogram (ECG) signals to detect seizure onset. In this algorithm, spectral and spatial features are selected from seizure and non-seizure EEG signals by Gabor functions and principal component analysis (PCA). Furthermore, four features based on heart rate acceleration are extracted from ECG signals to form the feature vector. Then a neural network classifier based on an improved particle swarm optimization (IPSO) learning algorithm is developed to determine an optimal nonlinear decision boundary; it allows the parameters of the neural network to be adjusted efficiently. The algorithm can automatically detect the presence of seizures with minimum delay, which is an important factor from a clinical viewpoint. The performance of the proposed algorithm is evaluated on a dataset consisting of 154 h of records and 633 seizures from 12 patients. The results indicate that the algorithm can recognize seizures with the smallest latency and a higher good detection rate (GDR) than other algorithms presented in the literature.

  9. Trace Ratio Criterion for Feature Extraction in Classification

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    2014-01-01

    Full Text Available A generalized linear discriminant analysis based on the trace ratio criterion algorithm (GLDA-TRA) is derived to extract features for classification. With the proposed GLDA-TRA, a set of orthogonal features can be extracted in succession. Each newly extracted feature is the optimal feature that maximizes the trace ratio criterion function in the subspace orthogonal to the space spanned by the previously extracted features.
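
    A generic trace-ratio iteration (in the style of Wang et al.), not necessarily the GLDA-TRA recursion itself, alternates an eigendecomposition with a ratio update:

        import numpy as np

        def trace_ratio(Sb, Sw, k, iters=20):
            """Maximize tr(W'Sb W)/tr(W'Sw W) over orthonormal W (n x k)."""
            lam = 0.0
            for _ in range(iters):
                vals, vecs = np.linalg.eigh(Sb - lam * Sw)
                W = vecs[:, np.argsort(vals)[-k:]]           # top-k eigenvectors
                lam = np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
            return W, lam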

  10. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  11. The Combined Effect of Filters in ECG Signals for Pre-Processing

    Directory of Open Access Journals (Sweden)

    Isha V. Upganlawar

    2014-05-01

    Full Text Available The ECG signal is abruptly changing and continuous in nature. Diagnosing heart diseases such as paroxysmal arrhythmias is tied to intelligent health-care decisions, so the ECG signal needs to be pre-processed accurately before further operations on it, such as extracting features, wavelet decomposition, locating QRS complexes in ECG recordings and deriving related information such as heart rate and RR intervals, and classifying the signal with various classifiers. Filters play a very important role in analyzing the low-frequency components of the ECG signal. Biomedical signals are of low frequency, so the removal of power line interference and baseline wander is a very important step at the pre-processing stage of ECG. In this paper we study median filtering and FIR (Finite Impulse Response) filtering of ECG signals under noisy conditions.
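
    A common median-filtering scheme for baseline wander removal is the two-stage filter sketched below (SciPy assumed); the 200 ms and 600 ms windows are the usual illustrative choices, not values prescribed by this paper.

        import numpy as np
        from scipy.signal import medfilt

        def remove_baseline(ecg, fs):
            x = np.asarray(ecg, dtype=float)
            w1 = int(0.2 * fs) | 1            # ~200 ms window, forced odd
            w2 = int(0.6 * fs) | 1            # ~600 ms window, forced odd
            baseline = medfilt(medfilt(x, w1), w2)   # P/QRS/T removed, wander kept
            return x - baseline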

  12. Extraction of photomultiplier-pulse features

    Energy Technology Data Exchange (ETDEWEB)

    Joerg, Philipp; Baumann, Tobias; Buechele, Maximilian; Fischer, Horst; Gorzellik, Matthias; Grussenmeyer, Tobias; Herrmann, Florian; Kremser, Paul; Kunz, Tobias; Michalski, Christoph; Schopferer, Sebastian; Szameitat, Tobias [Physikalisches Institut der Universitaet Freiburg, Freiburg im Breisgau (Germany)

    2013-07-01

    Experiments in subatomic physics have to handle data rates of several MHz per readout channel to reach statistical significance for the measured quantities. Frequently such experiments have to deal with fast signals which may cover large dynamic ranges. For applications which require amplitude as well as time measurements with the highest accuracy, transient recorders with very high resolution and deep on-board memory are the first choice. We have built a 16-channel, 12- or 14-bit, single-unit VME64x/VXS sampling ADC module which may sample at rates up to 1 GS/s. Fast algorithms have been developed and successfully implemented for the readout of the recoil-proton detector at the COMPASS-II experiment at CERN. We report on the implementation of the feature extraction algorithms and the performance achieved during a pilot run with the COMPASS-II experiment.

  13. Concrete Slump Classification using GLCM Feature Extraction

    Science.gov (United States)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mixes of 30 MPa design compression strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired with a Nikon D-7000 camera set to high resolution. In the first step, the RGB images were converted to grey images and then cropped to 1024 x 1024 pixels. With an open-source program, the cropped images were analysed to extract GLCM features. The results show that for higher slump the contrast gets lower, but correlation, energy, and homogeneity get higher.
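
    A sketch of the four GLCM measures reported in the study, assuming scikit-image; quantizing to 64 grey levels and averaging over four angles are illustrative choices:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_img):
        """Contrast, correlation, energy and homogeneity from a grey-level co-occurrence matrix."""
        img = (gray_img // 4).astype(np.uint8)   # quantize 256 grey levels down to 64
        glcm = graycomatrix(img, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=64, symmetric=True, normed=True)
        return {p: float(graycoprops(glcm, p).mean())
                for p in ("contrast", "correlation", "energy", "homogeneity")}
    ```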

  14. ECG (image)

    Science.gov (United States)

    ... electrocardiogram (ECG, EKG) is used extensively in the diagnosis of heart disease, ranging from congenital heart disease in infants to myocardial infarction and myocarditis in adults. Several different types of ...

  15. Assessment of extraction parameters on antioxidant capacity, polyphenol content, epigallocatechin gallate (EGCG), epicatechin gallate (ECG) and iriflophenone 3-C-β-glucoside of agarwood (Aquilaria crassna) young leaves.

    Science.gov (United States)

    Tay, Pei Yin; Tan, Chin Ping; Abas, Faridah; Yim, Hip Seng; Ho, Chun Wai

    2014-08-14

    The effects of ethanol concentration (0%-100%, v/v), solid-to-solvent ratio (1:10-1:60, w/v) and extraction time (30-180 min) on the extraction of polyphenols from agarwood (Aquilaria crassna) were examined. Total phenolic content (TPC), total flavonoid content (TFC) and total flavanol (TF) assays and HPLC-DAD were used for the determination and quantification of polyphenols, flavanol gallates (epigallocatechin gallate--EGCG and epicatechin gallate--ECG) and a benzophenone (iriflophenone 3-C-β-glucoside) from the crude polyphenol extract (CPE) of A. crassna. 2,2'-Diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity was used to evaluate the antioxidant capacity of the CPE. Experimental results concluded that ethanol concentration and solid-to-solvent ratio had significant effects (p < 0.05) on the yields of polyphenol and antioxidant capacity. Extraction time had an insignificant influence on the recovery of EGCG, ECG and iriflophenone 3-C-β-glucoside, as well as radical scavenging capacity from the CPE. The extraction parameters that exhibited maximum yields were 40% (v/v) ethanol, 1:60 (w/v) for 30 min where the TPC, TFC, TF, DPPH, EGCG, ECG and iriflophenone 3-C-β-glucoside levels achieved were 183.5 mg GAE/g DW, 249.0 mg QE/g DW, 4.9 mg CE/g DW, 93.7%, 29.1 mg EGCG/g DW, 44.3 mg ECG/g DW and 39.9 mg iriflophenone 3-C-β-glucoside/g DW respectively. The IC50 of the CPE was 24.6 mg/L.

  16. Assessment of Extraction Parameters on Antioxidant Capacity, Polyphenol Content, Epigallocatechin Gallate (EGCG), Epicatechin Gallate (ECG) and Iriflophenone 3-C-β-Glucoside of Agarwood (Aquilaria crassna) Young Leaves

    Directory of Open Access Journals (Sweden)

    Pei Yin Tay

    2014-08-01

    Full Text Available The effects of ethanol concentration (0%–100%, v/v), solid-to-solvent ratio (1:10–1:60, w/v) and extraction time (30–180 min) on the extraction of polyphenols from agarwood (Aquilaria crassna) were examined. Total phenolic content (TPC), total flavonoid content (TFC) and total flavanol (TF) assays and HPLC-DAD were used for the determination and quantification of polyphenols, flavanol gallates (epigallocatechin gallate—EGCG and epicatechin gallate—ECG) and a benzophenone (iriflophenone 3-C-β-glucoside) from the crude polyphenol extract (CPE) of A. crassna. 2,2'-Diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity was used to evaluate the antioxidant capacity of the CPE. Experimental results concluded that ethanol concentration and solid-to-solvent ratio had significant effects (p < 0.05) on the yields of polyphenol and antioxidant capacity. Extraction time had an insignificant influence on the recovery of EGCG, ECG and iriflophenone 3-C-β-glucoside, as well as radical scavenging capacity from the CPE. The extraction parameters that exhibited maximum yields were 40% (v/v) ethanol, 1:60 (w/v) for 30 min where the TPC, TFC, TF, DPPH, EGCG, ECG and iriflophenone 3-C-β-glucoside levels achieved were 183.5 mg GAE/g DW, 249.0 mg QE/g DW, 4.9 mg CE/g DW, 93.7%, 29.1 mg EGCG/g DW, 44.3 mg ECG/g DW and 39.9 mg iriflophenone 3-C-β-glucoside/g DW respectively. The IC50 of the CPE was 24.6 mg/L.

  17. A novel approach to ECG classification based upon two-layered HMMs in body sensor networks.

    Science.gov (United States)

    Liang, Wei; Zhang, Yinlong; Tan, Jindong; Li, Yang

    2014-03-27

    This paper presents a novel approach to ECG signal filtering and classification. Unlike the traditional techniques which aim at collecting and processing the ECG signals with the patient being still, lying in bed in hospitals, our proposed algorithm is intentionally designed for monitoring and classifying the patient's ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices the whole day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave respectively in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. Then the corresponding interval features are selected and applied to categorize the ECG into normal type or abnormal type (PVC, APC) in the second HMM layer. For verifying the effectiveness of our algorithm on abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted, displayed and the corresponding classification outcomes are deduced and shown on the BSN screen.
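
    The paper trains its first HMM layer with an expert-annotation-assisted Baum-Welch procedure; as a hedged stand-in, the sketch below fits an unsupervised four-state Gaussian HMM (ISO, P, QRS, T) with the hmmlearn package and reads off per-state durations as the interval features passed to the second layer. The input file name is hypothetical.

    ```python
    import numpy as np
    from hmmlearn import hmm  # assumed dependency, stands in for the paper's HMM layer

    N_STATES = 4  # ISO interval, P subwave, QRS complex, T subwave

    # One beat as a column vector of samples (hypothetical file).
    beat = np.loadtxt("beat.txt")[:, None]

    model = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=50)
    model.fit(beat)                    # unsupervised stand-in for annotated Baum-Welch
    labels = model.predict(beat)       # per-sample state labels -> segment boundaries

    # Interval (duration) features for the second, classification HMM layer.
    durations = [int(np.sum(labels == s)) for s in range(N_STATES)]
    ```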

  18. A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Wei Liang

    2014-03-01

    Full Text Available This paper presents a novel approach to ECG signal filtering and classification. Unlike the traditional techniques which aim at collecting and processing the ECG signals with the patient being still, lying in bed in hospitals, our proposed algorithm is intentionally designed for monitoring and classifying the patient’s ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices the whole day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave respectively in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. Then the corresponding interval features are selected and applied to categorize the ECG into normal type or abnormal type (PVC, APC) in the second HMM layer. For verifying the effectiveness of our algorithm on abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted, displayed and the corresponding classification outcomes are deduced and shown on the BSN screen.

  19. HEURISTICAL FEATURE EXTRACTION FROM LIDAR DATA AND THEIR VISUALIZATION

    OpenAIRE

    Ghosh, S.; Lohani, B.

    2012-01-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.

  20. ECG Frequency Domain Features Extraction: A New Characteristic for Arrhythmias Classification

    Science.gov (United States)

    2007-11-02

    I. Romero, L. Serrano; Department of Electrical and Electronic Engineering, Public University of Navarra, Campus de Arrosadia, 31006 Pamplona, Spain
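
    The abstract of this report is not preserved in the record; as a generic illustration of the frequency-domain ECG features the title refers to, band powers can be computed from a Welch periodogram (the band edges here are assumptions for illustration):

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_powers(ecg, fs, bands=((0.5, 4), (4, 8), (8, 16), (16, 40))):
        """Integrate the Welch PSD over a few frequency bands as a feature vector."""
        f, pxx = welch(ecg, fs=fs, nperseg=min(len(ecg), 4 * fs))
        return [float(np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)]))
                for lo, hi in bands]
    ```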

  1. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features to extract the features embodied in a sheet-metal part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship identification. Since the extracted features include abundant geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  3. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during feature extraction to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both the training and test images before extraction. A Wiener filter and a median filter are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier and conduct a comparative analysis on a benchmark dataset with different filters and sizes. Our experiments demonstrate that combining the feature extraction process with filtering gives promising results on noisy images.
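
    A compact sketch of the pipeline this abstract describes, assuming scipy and scikit-image: median filtering to suppress noise, then a uniform local-binary-pattern histogram as the texture feature vector (a Naïve Bayes classifier would consume these histograms):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.feature import local_binary_pattern

    def lbp_histogram(img, P=8, R=1):
        """Denoise, then summarize texture as a normalized uniform-LBP histogram."""
        denoised = median_filter(img, size=3)                  # suppress impulsive noise
        lbp = local_binary_pattern(denoised, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist   # (P + 2)-dimensional feature vector
    ```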

  4. WAVELET ANALYSIS OF ABNORMAL ECGS

    Directory of Open Access Journals (Sweden)

    Vasudha Nannaparaju

    2014-02-01

    Full Text Available Warning signals generated by the heart can be detected from the ECG. An accurate and reliable diagnosis from the ECG is very important, yet it is cumbersome and at times ambiguous in the time domain due to the presence of noise. Study of the ECG in the wavelet domain, using both the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) with well-known wavelets as well as a wavelet proposed by the authors for this investigation, is found to be useful and yields fairly reliable results. In this study, wavelet analysis of the ECGs of normal, hypertensive, diabetic and cardiac subjects is carried out. The salient feature of the study is that detection of P and T phases, which are otherwise feeble or absent in raw ECGs, is feasible in the wavelet domain.
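
    A minimal sketch of such a wavelet-domain view of an ECG, using pywt with a Mexican-hat wavelet as a stand-in (the authors' custom wavelet is not given in this record); energy at coarse scales highlights the low-frequency P and T phases that are feeble in the raw trace. The sampling rate and file name are assumptions.

    ```python
    import numpy as np
    import pywt

    FS = 360                                   # assumed sampling rate (Hz)
    ecg = np.loadtxt("ecg_beat.txt")           # hypothetical single-lead recording

    scales = np.arange(1, 64)
    coeffs, freqs = pywt.cwt(ecg, scales, "mexh", sampling_period=1 / FS)

    qrs_band = np.abs(coeffs[:10]).sum(axis=0)   # fine scales: QRS complex
    p_t_band = np.abs(coeffs[40:]).sum(axis=0)   # coarse scales: P and T wave candidates
    ```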

  5. Handwritten Character Classification using the Hotspot Feature Extraction Technique

    NARCIS (Netherlands)

    Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2012-01-01

    Feature extraction techniques can be important in character recognition, because they can enhance the efficacy of recognition in comparison to featureless or pixel-based approaches. This study aims to investigate the novel feature extraction technique called the hotspot technique in order to use it

  6. Simple electrocardiogram (ECG) signal analyzer for homecare system among the elderly.

    Science.gov (United States)

    Lin, Liuh-Chii; Yeh, Yun-Chi; Ho, Kuei-Jung

    2015-01-01

    This study presents a simple electrocardiogram (ECG) signal analyzer for a homecare system for the elderly. It can transmit the patient's ECG signals through Bluetooth to computers around the house, where the signals are analyzed. If an abnormal heartbeat is found, an emergency call is dialed automatically. Meanwhile, the determined heartbeat class of the ECG signals is forwarded to the patient's MD through the internet. The patient can therefore move freely around the house while wearing the proposed cardiac arrhythmia signal analyzer. The proposed system consists of five major processing stages: (i) a preprocessing stage for amplifying the ECG signals and eliminating noise; (ii) an ECG signal transmitter/receiver stage, in which ECG signals are transmitted through Bluetooth to the receiver in the patient's house; (iii) a QRS extraction stage for detecting the QRS waveform using the Difference Operation Method (DOM); (iv) a qualitative features stage for selecting qualitative features of the ECG signals; and (v) a classification stage for determining the patient's heartbeat class using the Principal Component Analysis (PCA) method. In the experiment, the total classification accuracy (TCA) was approximately 93.19% on average.
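
    The DOM itself is not reproduced in this record; as a hedged stand-in for stage (iii), a first-difference QRS detector with an adaptive threshold and a 250 ms refractory period can be sketched as:

    ```python
    import numpy as np

    def detect_qrs(ecg, fs, k=2.5):
        """Peaks of the squared first-difference signal above an adaptive
        threshold are taken as QRS locations (illustrative stand-in for DOM)."""
        d = np.diff(ecg)
        e = d * d                          # emphasize steep QRS slopes
        thr = k * e.mean()                 # adaptive threshold
        refractory = int(0.25 * fs)        # 250 ms minimum beat spacing
        peaks, last = [], -refractory
        for i in range(1, len(e) - 1):
            if e[i] > thr and e[i] >= e[i - 1] and e[i] >= e[i + 1] and i - last > refractory:
                peaks.append(i)
                last = i
        return np.array(peaks)
    ```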

  7. Analytical Study of Feature Extraction Techniques in Opinion Mining

    Directory of Open Access Journals (Sweden)

    Pravesh Kumar Singh

    2013-07-01

    Full Text Available Although opinion mining is in a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract people's opinions based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering, support vector machines, etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses the various techniques and the second part makes a detailed appraisal of the major techniques used for feature extraction.

  8. Efficient sparse kernel feature extraction based on partial least squares.

    Science.gov (United States)

    Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John

    2009-08-01

    The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.

  9. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award-winning Feature Analyst® ...

  10. Extracting Conceptual Feature Structures from Text

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Lassen, Tine;

    2011-01-01

    This paper describes an approach to indexing texts by their conceptual content using ontologies along with lexico-syntactic information and semantic role assignment provided by lexical resources. The conceptual content of meaningful chunks of text is transformed into conceptual feature structures...

  11. [RVM supervised feature extraction and Seyfert spectra classification].

    Science.gov (United States)

    Li, Xiang-Ru; Hu, Zhan-Yi; Zhao, Yong-Heng; Li, Xiao-Ming

    2009-06-01

    With recent technological advances in wide-field survey astronomy and the implementation of several large-scale astronomical survey proposals (e.g. SDSS, 2dF and LAMOST), celestial spectra are becoming very abundant and rich. Therefore, research on automated classification methods based on celestial spectra has been attracting more and more attention in recent years. Feature extraction is a fundamental problem in automated spectral classification, which not only influences the difficulty and complexity of the problem, but also determines the performance of the designed classifying system. The available methods of feature extraction for spectra classification are usually unsupervised, e.g. principal components analysis (PCA), wavelet transform (WT), artificial neural networks (ANN) and Rough Set theory. These methods extract features not by their capability to classify spectra, but by some kind of power to approximate the original celestial spectra. Therefore, the features extracted by these methods are usually not the best ones for classification. In the present work, the authors pointed out the necessity of investigating supervised feature extraction by analyzing the characteristics of the spectra classification research in the available literature and the limitations of unsupervised feature extraction methods. The authors also studied supervised feature extraction based on the relevance vector machine (RVM) and its application in Seyfert spectra classification. RVM is a recently introduced method based on Bayesian methodology, automatic relevance determination (ARD), regularization techniques and a hierarchical prior structure. By this method, the authors can easily fuse the information in the training data, the authors' prior knowledge and beliefs about the problem, etc. RVM can effectively extract the features and reduce the data based on classifying capability. Extensive experiments show its superior performance in dimensional reduction and feature extraction for Seyfert spectra classification.

  12. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  13. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    Science.gov (United States)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.

  14. Topographic Feature Extraction for Bengali and Hindi Character Images

    CERN Document Server

    Bag, Soumen; 10.5121/sipij.2011.2215

    2011-01-01

    Feature selection and extraction plays an important role in different classification based problems such as face recognition, signature verification, optical character recognition (OCR) etc. The performance of OCR highly depends on the proper selection and extraction of feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed region, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar type characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi...

  15. Spoken Language Identification Using Hybrid Feature Extraction Methods

    CERN Document Server

    Kumar, Pawan; Mishra, A N; Chandra, Mahesh

    2010-01-01

    This paper introduces and motivates the use of hybrid robust feature extraction techniques for a spoken language identification (LID) system. Speech recognizers use a parametric form of a signal to get the most important distinguishable features of the speech signal for the recognition task. In this paper Mel-frequency cepstral coefficients (MFCC) and Perceptual linear prediction coefficients (PLP) along with two hybrid features are used for language identification. The two hybrid features, Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP), were obtained from a combination of MFCC and PLP. Two different classifiers, Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM), were used for classification. The experiments show a better identification rate using hybrid feature extraction techniques compared to conventional feature extraction methods. BFCC has shown better performance than MFCC with both classifiers. RPLP along with GMM has shown be...

  16. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed feature appears on small scale and feature with more global shape information shows up on large scale. We demonstrate this multi-scale description of features accords with human perception and can be further used for several applications as feature classification and viewpoint selection. Experiments exhibit that our method as a multi-scale analysis tool is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  17. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  18. Fingerprint Identification - Feature Extraction, Matching and Database Search

    NARCIS (Netherlands)

    Bazen, Asker Michiel

    2002-01-01

    Presents an overview of state-of-the-art fingerprint recognition technology for identification and verification purposes. Three principal challenges in fingerprint recognition are identified: extracting robust features from low-quality fingerprints, matching elastically deformed fingerprints and eff

  19. ECG Electrocardiogram (For Parents)

    Science.gov (United States)

    ECG (Electrocardiogram) ... whether there is any damage. How Is an ECG Done? There is nothing painful about getting an ...

  20. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.

  1. Convolutional Neural Networks for patient-specific ECG classification.

    Science.gov (United States)

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-01-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).
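
    A minimal PyTorch sketch of such a 1D CNN that fuses feature extraction and classification for single beats; the layer sizes, the beat length of 128 samples and the five classes are illustrative assumptions, not the authors' architecture:

    ```python
    import torch
    import torch.nn as nn

    class BeatCNN(nn.Module):
        """Minimal 1-D CNN mapping one ECG beat to class scores (illustrative only)."""
        def __init__(self, n_classes=5, beat_len=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            )
            # Two pooling stages shrink the beat by a factor of 4.
            self.classifier = nn.Linear(32 * (beat_len // 4), n_classes)

        def forward(self, x):               # x: (batch, 1, beat_len)
            z = self.features(x)            # learned morphological features
            return self.classifier(z.flatten(1))

    model = BeatCNN()
    logits = model(torch.randn(8, 1, 128))  # 8 dummy beats -> (8, 5) class scores
    ```

    Per the abstract, one such small network would be trained per patient on common plus patient-specific beats, then run over long Holter records or on a wearable device.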

  2. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available A scale-space extreme point extraction method for a binary multiscale and rotation invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms often select the neighborhood information of feature points which are extremes of the image scale space, obtained by constructing an image pyramid using a certain signal transform method. But building the image pyramid always consumes a large amount of computing and storage resources and is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm; it does not need to build the image pyramid, but can extract scale-extreme feature points quickly. Feature points extracted by the proposed method are multiscale and rotation invariant and are fit to construct the local feature descriptor.

  3. A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Yi Luo

    2013-05-01

    Full Text Available In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs) and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier tool. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness.
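
    A rough sketch of the decomposition-plus-spectrum feature step, assuming the PyEMD package for EEMD; the ensemble size, the number of IMFs kept, and the file name are illustrative (per the abstract, the resulting vectors would then pass through PCA and a K-NN classifier):

    ```python
    import numpy as np
    from PyEMD import EEMD          # assumed dependency providing ensemble EMD
    from scipy.signal import welch

    ecg = np.loadtxt("subject01.txt")         # hypothetical normalized heartbeat signal

    imfs = EEMD(trials=50).eemd(ecg)          # decompose into intrinsic mode functions
    # Welch spectra of the first few IMFs, concatenated into one identity feature vector.
    features = np.concatenate([welch(imf, nperseg=256)[1] for imf in imfs[:4]])
    ```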

  4. Feature Extraction by Wavelet Decomposition of Surface

    Directory of Open Access Journals (Sweden)

    Prashant Singh

    2010-07-01

    Full Text Available The paper presents a new approach to surface acoustic wave (SAW) chemical sensor array design and data processing for recognition of volatile organic compounds (VOCs) based on transient responses. The array is constructed of variable-thickness single-polymer-coated SAW oscillator sensors. The thicknesses of the polymer coatings are selected such that, during the sensing period, different sensors are loaded with varied levels of diffusive inflow of vapour species due to different stages of termination of the equilibration process. Using a single polymer for coating the individual sensors with different thicknesses introduces vapour-specific kinetics variability in the transient responses. The transient shapes are analysed by wavelet decomposition based on Daubechies mother wavelets. The set of discrete wavelet transform (DWT) approximation coefficients across the array transients is taken to represent the vapour sample in two alternate ways. In one, the sets generated by all the transients are combined into a single set to give a single representation to the vapour. In the other, the set of approximation coefficients at each data point generated by all transients is taken to represent the vapour. The latter results in as many alternate representations as there are approximation coefficients. The alternate representations of a vapour sample are treated as different instances or realisations for further processing. The wavelet analysis is then followed by principal component analysis (PCA) to create a new feature space. A comparative analysis of the feature spaces created by both methods leads to the conclusion that they yield complementary information: one reveals intrinsic data variables, and the other enhances class separability. The present approach is validated by generating synthetic transient response data based on a prototype polyisobutylene (PIB)-coated 3-element SAW sensor array exposed to 7 VOC vapours: chloroform, chlorobenzene o...

  5. Applying Feature Extraction for Classification Problems

    Directory of Open Access Journals (Sweden)

    Foon Chi

    2009-03-01

    Full Text Available With the wealth of image data that is now becoming increasingly accessible through the advent of the world wide web and the proliferation of cheap, high-quality digital cameras, it is becoming ever more desirable to be able to automatically classify images into appropriate categories, such that intelligent agents and other such intelligent software might make better-informed decisions regarding them without a need for excessive human intervention. However, as with most Artificial Intelligence (A.I.) methods it is seen as necessary to take small steps towards your goal. With this in mind, a method is proposed here to represent localised features using disjoint sub-images taken from several datasets of retinal images for their eventual use in an incremental learning system. A tile-based localised adaptive threshold selection method was taken for vessel segmentation based on separate colour components. Arteriole-venous differentiation was made possible by using the composite of these components and high-quality fundal images. Performance was evaluated on the DRIVE and STARE datasets, achieving average specificity of 0.9379 and sensitivity of 0.5924.

  6. Disease Classification and Biomarker Discovery Using ECG Data

    Directory of Open Access Journals (Sweden)

    Rong Huang

    2015-01-01

    Full Text Available In the recent decade, disease classification and biomarker discovery have become increasingly important in modern biological and medical research. ECGs are comparatively low-cost and noninvasive in screening and diagnosing heart diseases. With the development of personal ECG monitors, large amounts of ECGs are recorded and stored; therefore, fast and efficient algorithms are called for to analyze the data and make diagnoses. In this paper, an efficient and easy-to-interpret procedure of cardiac disease classification is developed through novel feature extraction methods and comparison of classifiers. Motivated by the observation that the distributions of various measures on ECGs of the diseased group are often skewed, heavy-tailed, or multimodal, we characterize the distributions by sample quantiles, which outperform sample means. Three classifiers are compared in application both to all features and to dimension-reduced features by PCA: stepwise discriminant analysis (SDA), SVM, and LASSO logistic regression. It is found that SDA applied to dimension-reduced features by PCA is the most stable and effective procedure, with sensitivity, specificity, and accuracy being 89.68%, 84.62%, and 88.52%, respectively.
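
    A minimal sketch of the quantile-feature idea followed by PCA and a discriminant classifier, using sklearn's LDA as a stand-in for the paper's stepwise discriminant analysis; the data here are random placeholders:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    def quantile_features(measures, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
        """Sample quantiles of per-beat ECG measures (e.g., RR intervals) for one subject."""
        return np.concatenate([np.quantile(m, qs) for m in measures])

    # Placeholder design matrix: one row of quantile features per subject.
    X = np.random.randn(60, 20)
    y = np.random.randint(0, 2, 60)           # disease labels (placeholder)

    clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    clf.fit(X, y)                             # LDA as a stand-in for SDA
    ```

    Quantiles, unlike means, capture the skewed, heavy-tailed or multimodal shape of the measure distributions that the abstract highlights.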

  7. Novel Moment Features Extraction for Recognizing Handwritten Arabic Letters

    Directory of Open Access Journals (Sweden)

    Gheith Abandah

    2009-01-01

    Full Text Available Problem statement: Offline recognition of handwritten Arabic text awaits accurate recognition solutions. Most of the Arabic letters have secondary components that are important in recognizing these letters. However these components have large writing variations. We targeted enhancing the feature extraction stage in recognizing handwritten Arabic text. Approach: In this study, we proposed a novel feature extraction approach of handwritten Arabic letters. Pre-segmented letters were first partitioned into main body and secondary components. Then moment features were extracted from the whole letter as well as from the main body and the secondary components. Using multi-objective genetic algorithm, efficient feature subsets were selected. Finally, various feature subsets were evaluated according to their classification error using an SVM classifier. Results: The proposed approach improved the classification error in all cases studied. For example, the improvements of 20-feature subsets of normalized central moments and Zernike moments were 15 and 10%, respectively. Conclusion/Recommendations: Extracting and selecting statistical features from handwritten Arabic letters, their main bodies and their secondary components provided feature subsets that give higher recognition accuracies compared to the subsets of the whole letters alone.

  8. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voronoi...

  9. EEG signal features extraction based on fractal dimension.

    Science.gov (United States)

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
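
    The paper's two novel indices are not specified in this record; the sketch below shows only the standard Higuchi estimate, one of the most common fractal-dimension features for EEG:

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Higuchi fractal dimension of a 1-D signal (a common EEG feature)."""
        N = len(x)
        lk = []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):                      # k down-sampled curves per scale
                idx = np.arange(m, N, k)
                if len(idx) < 2:
                    continue
                lm = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k)
                lengths.append(lm / k)              # normalized curve length L_m(k)
            lk.append(np.mean(lengths))
        # Slope of log L(k) versus log(1/k) estimates the fractal dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
        return slope
    ```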

  10. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  12. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being requested in several applications, has directed a number of researchers to automatic video analysis techniques. The processes for automatic video analysis are based on the recognition of short sequences of contiguous frames that describe the same scene (shots), and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques that identify the low-level visual features of the frames that best represent the shot content. To evaluate the performance of the features, key frames automatically extracted using them are compared to human operator video annotations.

  13. Feature extraction with LIDAR data and aerial images

    Science.gov (United States)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data is an irregularly spaced 3D point cloud including reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analyzing the point cloud is feature extraction. However, the interpretability of a LIDAR point cloud is often limited because no object information is provided, and the complex earth topography and object morphology make it impossible for a single operator to classify all the points precisely. In this paper, a hierarchical method for feature extraction with LIDAR data and aerial images is discussed. The aerial images provide information on object configuration and spatial distribution, and hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it was possible to detect more object information and obtain a better feature extraction result than using automatic filters alone.

  14. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness of the deaf and mute in India widens the communication gap between the deaf and hard-of-hearing community and the rest of society. Sign languages are developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time taken by each phase and the number of features extracted for 26 ISL gestures.
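
    A minimal OpenCV sketch of the SIFT phases the paper implements (the image path is hypothetical); detectAndCompute performs keypoint detection, orientation assignment and 128-dimensional descriptor extraction in one call:

    ```python
    import cv2

    # Hypothetical ISL gesture image, loaded as greyscale.
    img = cv2.imread("isl_gesture.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # descriptors is an (N, 128) matrix, one row per detected keypoint.
    print(f"{len(keypoints)} keypoints, descriptor matrix shape {descriptors.shape}")
    ```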

  15. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    Limited awareness of the deaf and mute in India widens the communication gap between the deaf and hard-of-hearing community and the rest of society. Sign languages are developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time taken by each phase and the number of features extracted for 26 ISL gestures.

  16. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    Science.gov (United States)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method to realize feature extraction on low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, Lee-filtering method is adopted to realize pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally the common linking method is adopted and the characteristic parameters of magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over other traditional ones for feature extraction of low contrast images.

  17. FPGA-based electrocardiography (ECG) signal analysis system using least-square linear phase finite impulse response (FIR) filter

    Directory of Open Access Journals (Sweden)

    Mohamed G. Egila

    2016-12-01

    Full Text Available This paper presents a proposed design for analyzing electrocardiography (ECG) signals. This methodology employs a highpass least-square linear phase Finite Impulse Response (FIR) filtering technique to filter out the baseline wander noise embedded in the input ECG signal to the system. The Discrete Wavelet Transform (DWT) was utilized as a feature extraction methodology to extract the reduced feature set from the input ECG signal. The design uses a back propagation neural network classifier to classify the input ECG signal. The system is implemented on a Xilinx 3AN-XC3S700AN Field Programmable Gate Array (FPGA) board. A system simulation has been done. The design is compared with some other designs, achieving a total accuracy of 97.8% and achieving a reduction in utilized resources in the FPGA implementation.

  18. Fast SIFT design for real-time visual feature extraction.

    Science.gov (United States)

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.

  19. Local features for enhancement and minutiae extraction in fingerprints.

    Science.gov (United States)

    Fronthaler, Hartwig; Kollreider, Klaus; Bigun, Josef

    2008-03-01

    Accurate fingerprint recognition presupposes robust feature extraction which is often hampered by noisy input data. We suggest common techniques for both enhancement and minutiae extraction, employing symmetry features. For enhancement, a Laplacian-like image pyramid is used to decompose the original fingerprint into sub-bands corresponding to different spatial scales. In a further step, contextual smoothing is performed on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor (linear symmetry features). For minutiae extraction, parabolic symmetry is added to the local fingerprint model which allows to accurately detect the position and direction of a minutia simultaneously. Our experiments support the view that using the suggested parabolic symmetry features, the extraction of which does not require explicit thinning or other morphological operations, constitute a robust alternative to conventional minutiae extraction. All necessary image processing is done in the spatial domain using 1-D filters only, avoiding block artifacts that reduce the biometric information. We present comparisons to other studies on enhancement in matching tasks employing the open source matcher from NIST, FIS2. Furthermore, we compare the proposed minutiae extraction method with the corresponding method from the NIST package, mindtct. A top five commercial matcher from FVC2006 is used in enhancement quantification as well. The matching error is lowered significantly when plugging in the suggested methods. The FVC2004 fingerprint database, notable for its exceptionally low-quality fingerprints, is used for all experiments.

  20. Surface Electromyography Feature Extraction Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Farzaneh Akhavan Mahdavi

    2012-12-01

    Full Text Available Considering the vast variety of EMG signal applications, such as rehabilitation of people suffering from mobility limitations, scientists have done much research on EMG control systems. In this regard, feature extraction of the EMG signal has been highly valued as a significant technique to extract the desired information from the EMG signal and remove unnecessary parts. In this study, the Wavelet Transform (WT) has been applied as the main technique to extract Surface EMG (SEMG) features, because the WT is consistent with the nature of EMG as a nonstationary signal. Furthermore, two evaluation criteria, namely the RES index (the ratio of a Euclidean distance to a standard deviation) and the scatter plot, are employed to investigate the efficiency of wavelet feature extraction. The results illustrated an improvement in class separability of hand movements in the feature space. Accordingly, it has been shown that only the SEMG features extracted from the first and second levels of WT decomposition by the second order of the Daubechies family (db2) yielded the best class separability.

  1. THE IDENTIFICATION OF PILL USING FEATURE EXTRACTION IN IMAGE MINING

    Directory of Open Access Journals (Sweden)

    A. Hema

    2015-02-01

    Full Text Available With the help of image mining techniques, an automatic pill identification system was investigated in this study for matching images of pills based on several features, such as imprint, color, size and shape. Image mining is an inter-disciplinary task requiring expertise from various fields such as computer vision, image retrieval, image matching and pattern recognition. Image mining is the method by which unusual patterns are detected, so that only hidden and useful image data are stored in a large database. It involves two different approaches for image matching. This research presents drug identification, registration, detection and matching, with text, color and shape extraction of the image, using image mining concepts to identify legal and illegal pills with greater accuracy. Initially, the preprocessing is carried out using a novel interpolation algorithm, whose main aim is to reduce the artifacts, blurring and jagged edges introduced during up-sampling. Then the registration process is proposed with two modules: feature extraction and corner detection. In feature extraction, the noisy high-frequency edges are discarded and the relevant high-frequency edges are selected. The corner detection approach detects the high-frequency pixels at the intersection points, through which the overall performance is improved. There is a need to segregate the dataset into groups based on the query image's size, shape, color, text, etc.; that process of segregating the required information is called feature extraction. The feature extraction is done using geometrical gradient feature transformation. Finally, color and shape feature extraction were performed using a color histogram and a geometrical gradient vector. Simulation results show that the proposed techniques provide accurate retrieval results, in terms of both time and accuracy, when compared to conventional approaches.

  2. Combining Multiple Feature Extraction Techniques for Handwritten Devnagari Character Recognition

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present an OCR for handwritten Devnagari characters. Basic symbols are recognized by a neural classifier. We have used four feature extraction techniques, namely intersection, shadow features, chain code histogram and straight line fitting features. Shadow features are computed globally for the character image, while intersection features, chain code histogram features and line fitting features are computed by dividing the character image into different segments. A weighted majority voting technique is used for combining the classification decisions obtained from four Multi Layer Perceptron (MLP) based classifiers. On experimentation with a dataset of 4900 samples, the overall recognition rate observed is 92.80% when the top five choices are considered. This method is compared with other recent methods for handwritten Devnagari character recognition, and it has been observed that this approach has a better success rate than the other methods.

  3. The Research of ECG Signal Automatic Segmentation Algorithm Based on Fractal Dimension Trajectory

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, an ECG signal automatic segmentation algorithm based on the ECG fractal dimension trajectory is put forward. First, the ECG signal is analyzed; then the fractal dimension trajectory of the ECG signal is constructed according to the fractal dimension trajectory construction algorithm; finally, the ECG signal feature points are obtained and automatic segmentation is realized using the features of the ECG fractal dimension trajectory and the frequency-domain characteristics of the ECG. Matlab simulation of the algorithm showed that constructing the ECG fractal dimension trajectory displays the location of each ECG component clearly and achieves a high success rate of ECG segmentation, providing a basis for identifying the various components of the ECG signal accurately.

  4. Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

    OpenAIRE

    Turroni, Francesco

    2012-01-01

    The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...

  5. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    The majority of musical feature extraction applications are based on the Fourier transform in various disguises. This is despite the fact that this transform is subject to a series of restrictions, which admittedly ease the computation and interpretation of transform coefficients, but also impose... arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transforms such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  6. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies on moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research uses that relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
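
    The MFEA itself is not specified in detail here; the sketch below (Python, illustrative only) computes raw and central moments of an image block, the kind of coherent-intensity measurement the abstract relates to motion estimation. The frames and the "motion" are synthetic placeholders.

        import numpy as np

        def block_moments(block):
            """Raw and central image moments of a gray-level block.

            Returns (m00, centroid_x, centroid_y, mu20, mu02, mu11), a small
            moment-based descriptor of the block's intensity distribution.
            """
            block = np.asarray(block, dtype=float)
            y, x = np.mgrid[0:block.shape[0], 0:block.shape[1]]
            m00 = block.sum()
            cx, cy = (x * block).sum() / m00, (y * block).sum() / m00
            mu20 = ((x - cx) ** 2 * block).sum() / m00
            mu02 = ((y - cy) ** 2 * block).sum() / m00
            mu11 = ((x - cx) * (y - cy) * block).sum() / m00
            return m00, cx, cy, mu20, mu02, mu11

        frame_a = np.random.rand(16, 16)
        frame_b = np.roll(frame_a, 2, axis=1)      # simulated 2-pixel motion
        fa, fb = block_moments(frame_a), block_moments(frame_b)
        print(fb[1] - fa[1])                       # approximate centroid shift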

  7. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak against rotated images, so the method was improved by adding HLAC features computed on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features computed using 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and four output values: those of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of blood vessels.
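
    A minimal sketch of the HLAC idea, assuming the usual formulation in which each feature is a sum over the image of products of pixel values at mask offsets; only a handful of the 105 mask patterns used in the paper are shown, and the binary input is a random placeholder.

        import numpy as np

        def hlac_feature(img, offsets):
            """One HLAC feature: sum over pixels of the product of the image
            shifted by each offset in the mask (offsets include (0, 0))."""
            img = np.asarray(img, dtype=float)
            prod = np.ones_like(img)
            for dy, dx in offsets:
                prod = prod * np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            return prod[1:-1, 1:-1].sum()   # ignore the wrapped border

        # A few of the order-0/1/2 masks within a 3x3 neighborhood.
        MASKS = [
            [(0, 0)],                        # order 0
            [(0, 0), (0, 1)],                # horizontal pair
            [(0, 0), (1, 0)],                # vertical pair
            [(0, 0), (0, 1), (0, -1)],       # horizontal triple
            [(0, 0), (1, 1), (-1, -1)],      # diagonal triple
        ]

        binary = (np.random.rand(32, 32) > 0.5).astype(float)
        features = [hlac_feature(binary, m) for m in MASKS]
        print(np.round(features, 1))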

  8. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise removal step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  9. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  10. Remote Sensing Image Feature Extracting Based Multiple Ant Colonies Cooperation

    Directory of Open Access Journals (Sweden)

    Zhang Zhi-long

    2014-02-01

    Full Text Available This paper presents a novel feature extraction method for remote sensing imagery based on the cooperation of multiple ant colonies. First, multiresolution expression of the input remote sensing imagery is created, and two different ant colonies are spread on different resolution images. The ant colony in the low-resolution image uses phase congruency as the inspiration information, whereas that in the high-resolution image uses gradient magnitude. The two ant colonies cooperate to detect features in the image by sharing the same pheromone matrix. Finally, the image features are extracted on the basis of the pheromone matrix threshold. Because a substantial amount of information in the input image is used as inspiration information of the ant colonies, the proposed method shows higher intelligence and acquires more complete and meaningful image features than those of other simple edge detectors.

  11. Face Feature Extraction for Recognition Using Radon Transform

    Directory of Open Access Journals (Sweden)

    Justice Kwame Appati

    2016-07-01

    Full Text Available Face recognition has for some time been a challenging exercise, especially when it comes to recognizing faces under different poses, perhaps due to the use of inappropriate descriptors during the feature extraction stage. In this paper, a thorough examination of the Radon transform as a face signature descriptor was carried out on a standard database. Global features were considered by constructing Gray Level Co-occurrence Matrices (GLCMs). Correlation, energy, homogeneity and contrast are computed from each image to form the feature vector for recognition. We show that the transformed face signatures are robust and invariant to different poses. With the statistical features extracted, face training classes are optimally separated through the use of a Support Vector Machine (SVM), while the recognition rate for test face images is computed based on the L1 norm.
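
    A hedged sketch of the described pipeline using scikit-image: Radon-transform the face, then summarize the sinogram with GLCM statistics (contrast, correlation, energy, homogeneity). The function names follow current scikit-image (graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops), and the random input stands in for a face image.

        import numpy as np
        from skimage.transform import radon
        from skimage.feature import graycomatrix, graycoprops  # 'greyco*' in older releases

        def radon_glcm_features(face, angles=np.arange(0, 180, 10)):
            """Radon-transform a face image, then summarize the sinogram with
            GLCM statistics (contrast, correlation, energy, homogeneity)."""
            sinogram = radon(face.astype(float), theta=angles, circle=False)
            # Quantize the sinogram to 8-bit levels for the co-occurrence matrix.
            q = np.uint8(255 * (sinogram - sinogram.min()) /
                         (sinogram.max() - sinogram.min() + 1e-12))
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            return np.array([graycoprops(glcm, p)[0, 0]
                             for p in ('contrast', 'correlation',
                                       'energy', 'homogeneity')])

        face = np.random.rand(64, 64)          # stand-in for a face image
        print(radon_glcm_features(face))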

  12. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision codes and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.

  13. Discriminative tonal feature extraction method in mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

    To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden markov model (HMM) based tone modeling. The method uses linear transforms to project F0 (fundamental frequency) features of neighboring syllables as compensations, and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained by using an objective function termed as "minimum tone error", which is a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve 3.82% tone recognition rate improvement, compared with the baseline, using maximum likelihood trained HMM on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.

  14. GFF-Ex: a genome feature extraction package

    OpenAIRE

    Rastogi, Achal; Gupta, Dinesh

    2014-01-01

    Background Genomic features of whole genome sequences emerging from various sequencing and annotation projects are represented and stored in several formats. Amongst these formats, the GFF (Generic/General Feature Format) has emerged as a widely accepted, portable and successfully used flat file format for genome annotation storage. With an increasing interest in genome annotation projects and secondary and meta-analysis, there is a need for efficient tools to extract sequences of interest f...

  15. Data Feature Extraction for High-Rate 3-Phase Data

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm was developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
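
    The algorithm description is brief; a plausible sketch under those assumptions estimates the start time from a threshold crossing of the amplitude envelope (here obtained from the analytic signal) and reads the steady-state magnitude off the same envelope. The threshold fraction, sampling rate and toy signal below are placeholders.

        import numpy as np
        from scipy.signal import hilbert

        def start_time_and_envelope(sig, fs, thresh_frac=0.1):
            """Estimate onset time (first crossing of a fraction of peak
            amplitude) and amplitude envelope of one phase signal."""
            env = np.abs(hilbert(sig))               # analytic-signal envelope
            above = np.nonzero(env > thresh_frac * env.max())[0]
            t0 = above[0] / fs if above.size else None
            return t0, env

        fs = 10_000.0
        t = np.arange(0, 0.2, 1 / fs)
        phase_a = np.where(t > 0.05, np.sin(2 * np.pi * 60 * t), 0.0)
        t0, env = start_time_and_envelope(phase_a, fs)
        print(f"estimated start: {t0:.4f} s")        # ~0.05 s
        steady_mag = env[int(0.15 * fs):].mean()     # steady-state magnitude
        print(f"steady-state magnitude: {steady_mag:.3f}")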

  16. TOPOGRAPHIC FEATURE EXTRACTION FOR BENGALI AND HINDI CHARACTER IMAGES

    Directory of Open Access Journals (Sweden)

    Soumen Bag

    2011-06-01

    Full Text Available Feature selection and extraction play an important role in different classification-based problems such as face recognition, signature verification, optical character recognition (OCR), etc. The performance of OCR highly depends on the proper selection and extraction of the feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By the topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed regions, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for efficiently discriminating very similar characters. We have tested the proposed method on printed and handwritten Bengali and Hindi character images. Initial results demonstrate the efficacy of our approach.

  17. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  18. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints in the solution to improve scalability. The algorithm is te...

  1. Features extraction in anterior and posterior cruciate ligaments analysis.

    Science.gov (United States)

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make the ligaments easier to diagnose. Extraction of the feature vectors is obtained by analysis of both the anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed, which permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligament are built and the feature vectors are created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK).
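
    For reference, a minimal fuzzy C-means sketch on 1-D pixel intensities (Python); the paper's median modification, which additionally filters the memberships to suppress blurred edges, is noted in a comment but not implemented, and the sample data are synthetic.

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
            """Minimal fuzzy C-means on 1-D samples (e.g., pixel intensities).

            Returns (centers, U) where U[i, k] is the membership of sample k in
            cluster i. The median-modified variant in the paper additionally
            median-filters memberships to suppress blurred edges.
            """
            rng = np.random.default_rng(seed)
            x = np.asarray(x, dtype=float)
            U = rng.random((c, x.size))
            U /= U.sum(axis=0)
            for _ in range(iters):
                um = U ** m
                centers = um @ x / um.sum(axis=1)
                d = np.abs(x[None, :] - centers[:, None]) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))
                U /= U.sum(axis=0)
            return centers, U

        pixels = np.concatenate([np.random.normal(50, 5, 500),
                                 np.random.normal(150, 10, 500)])
        centers, U = fuzzy_c_means(pixels)
        print(np.sort(centers).round(1))    # approximately [50, 150]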

  2. METHOD TO EXTRACT BLEND SURFACE FEATURE IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    Lü Zhen; Ke Yinglin; Sun Qing; Kelvin W; Huang Xiaoping

    2003-01-01

    A new method of extraction of blend surface feature is presented. It contains two steps: segmentation and recovery of parametric representation of the blend. The segmentation separates the points in the blend region from the rest of the input point cloud with the processes of sampling point data, estimation of local surface curvature properties and comparison of maximum curvature values. The recovery of parametric representation generates a set of profile curves by marching throughout the blend and fitting cylinders. Compared with the existing approaches of blend surface feature extraction, the proposed method reduces the requirement of user interaction and is capable of extracting blend surface with either constant radius or variable radius. Application examples are presented to verify the proposed method.

  3. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

    Full Text Available Audio classification is a fundamental step in coping with the rapid growth of audio data volume. Due to the increasing size of multimedia sources, speech and music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multiresolution analysis is the most significant statistical way to extract features from the input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVM) are based on the principle of structural risk minimization. SVM is applied to classify audio into speech and music classes by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
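
    A hedged sketch of the feature/classifier pairing using PyWavelets and scikit-learn: per-sub-band DWT energies as features and an SVM as the classifier. The synthetic "speech" and "music" signals, wavelet choice and decomposition level are placeholders, not the paper's setup.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_energy_features(sig, wavelet='db4', level=4):
            """Relative energy of each DWT sub-band as an acoustic feature."""
            coeffs = pywt.wavedec(sig, wavelet, level=level)
            e = np.array([np.sum(c ** 2) for c in coeffs])
            return e / e.sum()

        rng = np.random.default_rng(0)
        n, fs = 200, 8000
        t = np.arange(fs) / fs
        # Toy stand-ins: noisy low tone for 'speech', rich harmonics for 'music'.
        speech = [np.sin(2*np.pi*200*t) + 0.5*rng.standard_normal(fs) for _ in range(n)]
        music = [sum(np.sin(2*np.pi*f*t) for f in (262, 330, 392, 523))
                 + 0.1*rng.standard_normal(fs) for _ in range(n)]
        X = np.array([dwt_energy_features(s) for s in speech + music])
        y = np.array([0]*n + [1]*n)
        clf = SVC(kernel='rbf').fit(X[::2], y[::2])         # train on half
        print('accuracy:', clf.score(X[1::2], y[1::2]))     # test on the rest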

  4. Feature extraction from slice data for reverse engineering

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yingjie; LU Shangning

    2007-01-01

    A new approach to feature extraction from slice data points is presented. The reconstruction of objects is performed as follows. First, all contours in each slice are extracted by contour tracing algorithms. Then the data points on the contours are analyzed, and the curve segments of the contours are divided into three categories: straight lines, conic curves and B-spline curves. Curve fitting methods are applied to each curve segment to remove unwanted points within a pre-determined tolerance. Finally, the features, which consist of the objects and the connection relations among them, are found by matching the corresponding contours in adjacent slices, and 3D models are reconstructed based on the features. The proposed approach has been implemented in OpenGL, and its feasibility has been verified by several cases.

  5. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    ...difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training... over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments... the complexity of hand-crafting feature extractors that combine information across dissimilar modalities of input. Frequent sequence mining is presented as a method to learn feature extractors that fuse physiological and contextual information. This method is evaluated in a game-based dataset and compared...

  6. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

    Full Text Available In computer vision, object detection is an essential process for further processing such as object tracking, analysis and so on. In the same context, feature extraction plays an important role in detecting objects correctly. In this paper we present a method to extract local features based on interest points, which are used to detect key-points within an image; a histogram of gradients (HOG) is then computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and excludes its descriptor. The new descriptor is computed using the HOG method. The proposed method thus gains the advantages of both of the above methods. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial result is encouraging in spite of using a small amount of training data.

  7. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that needs only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. Geometric features of facial images such as the eyes, nose and mouth are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.

  8. Artificial neural network-based classification of body movements in ambulatory ECG signal.

    Science.gov (United States)

    Darji, Sachin T; Kher, Rahul K

    2013-11-01

    Ambulatory ECG monitoring records the electrical activity of the heart while a person carries out normal routine activities. The recorded ECG signal thus consists of the cardiac signal along with motion artifacts introduced by the person's body movements during routine activities. Detection of motion artifacts due to different physical activities might help in further cardiac diagnosis. Ambulatory ECG signal analysis for the detection of various motion artifacts using an adaptive filtering approach is addressed in this paper. We used a BIOPAC MP 36 system for acquiring the ECG signal. The ECG signals of five healthy subjects (aged 22-30 years) were recorded in the lead I configuration while the person performed various body movements: up and down movement of the left hand, up and down movement of the right hand, waist twisting while standing, and changing from sitting on a chair to standing up. An adaptive filter-based approach has been used to extract the motion artifact component from the ambulatory ECG signal. The features of the motion artifact signal, extracted using the Gabor transform, have been used to train an artificial neural network (ANN) to classify body movements.
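
    The adaptive filtering step can be sketched with a standard LMS filter, assuming an artifact-correlated reference signal is available (the abstract does not specify one); the tap count, step size and synthetic signals below are illustrative only.

        import numpy as np

        def lms_artifact_extraction(primary, reference, taps=16, mu=0.01):
            """LMS adaptive filter: predict the artifact in `primary` from
            `reference`; the filter output is the artifact estimate and the
            residual e is the cleaned ECG."""
            w = np.zeros(taps)
            artifact = np.zeros_like(primary)
            clean = np.zeros_like(primary)
            for k in range(taps, len(primary)):
                x = reference[k - taps:k][::-1]
                y = w @ x                       # artifact estimate
                e = primary[k] - y              # cleaned-signal sample
                w += 2 * mu * e * x             # LMS weight update
                artifact[k], clean[k] = y, e
            return clean, artifact

        fs = 500
        t = np.arange(0, 10, 1 / fs)
        toy_ecg = np.sin(2 * np.pi * 1.2 * t)               # toy cardiac signal
        motion = 0.8 * np.sin(2 * np.pi * 0.3 * t)          # toy body-movement drift
        ref = np.sin(2 * np.pi * 0.3 * t)                   # correlated reference
        clean, art = lms_artifact_extraction(toy_ecg + motion, ref)
        print(np.corrcoef(art[fs:], motion[fs:])[0, 1])     # close to 1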

  9. Feature Extraction and Selection From the Perspective of Explosive Detection

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image that summarize its information content and thereby provide an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or for grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, they can be the temperature measurement (using an infra-red camera) of the area representing the pixel, or the X-ray attenuation in a given volume element of a 3-d image, or they may even represent the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-ray device with provisions for computed tomography (CT) that generates one or more (depending on the number of energy levels used

  10. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  11. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) from an image. However, traditional methods using BMS did not consider the correlation of the binary sequences in BMS or the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this method lessens the influence of non-continuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure the distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.

  12. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem, one after the other. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  13. FACE RECOGNITION USING FEATURE EXTRACTION AND NEURO-FUZZY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Ritesh Vyas

    2012-09-01

    Full Text Available The face is a primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. The human ability to recognize faces is remarkable: people can recognize thousands of faces learned throughout their lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards or changes in hair style. In this work, a system is designed to recognize human faces based on their facial features. To reveal the outlines of the face, eyes and nose, an edge detection technique is used. Facial features are extracted in the form of distances between important feature points. After normalization, these feature vectors are learned by an artificial neural network and used to recognize facial images.

  14. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person carried out via the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. The experiments use the FormSpring.me dataset, and the effects of preprocessing methods; several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbullying detection performance. When the classifiers are compared, C4.5 performs best on the dataset used.
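
    A minimal scikit-learn sketch of the reported configuration (alphabetic tokenization, chi-square feature selection, then a classifier); the toy posts are invented placeholders for the FormSpring.me data, and a multinomial Naive Bayes stands in for C4.5, which scikit-learn does not provide.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny illustrative corpus (placeholder posts, not the FormSpring.me data).
        posts = ["you are awesome", "have a nice day", "everyone hates you loser",
                 "you are so dumb and ugly", "great game last night", "shut up idiot"]
        labels = [0, 0, 1, 1, 0, 1]            # 1 = bullying

        pipe = make_pipeline(
            CountVectorizer(lowercase=True, token_pattern=r"[a-zA-Z]+"),  # alphabetic tokens
            SelectKBest(chi2, k=5),            # chi-square feature selection
            MultinomialNB(),
        )
        pipe.fit(posts, labels)
        # Expect the first post flagged as bullying, the second as benign.
        print(pipe.predict(["you dumb loser", "nice day everyone"]))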

  15. ECG De-noising

    DEFF Research Database (Denmark)

    Kærgaard, Kevin; Jensen, Søren Hjøllund; Puthusserypady, Sadasivan

    2015-01-01

    Electrocardiogram (ECG) is a widely used noninvasive method to study the rhythmic activity of the heart and thereby to detect abnormalities. However, these signals are often obscured by artifacts from various sources, and minimization of these artifacts is of paramount importance. This paper proposes two adaptive techniques, namely the EEMD-BLMS (Ensemble Empirical Mode Decomposition in conjunction with the Block Least Mean Square algorithm) and DWT-NN (Discrete Wavelet Transform followed by Neural Network) methods, for minimizing the artifacts in recorded ECG signals, and compares their performance. These methods were first compared on two types of simulated noise-corrupted ECG signals: Type-I (desired ECG plus noise frequencies outside the ECG frequency band) and Type-II (ECG plus noise frequencies both inside and outside the ECG frequency band). Subsequently, they were tested on real ECG recordings...
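
    The DWT-NN method couples a wavelet front-end with a neural network; as a simpler, hedged stand-in for that front-end, the sketch below applies plain DWT soft-threshold denoising with PyWavelets (universal threshold with a MAD noise estimate). The wavelet, level and toy signal are placeholders.

        import numpy as np
        import pywt

        def dwt_denoise(sig, wavelet='db6', level=4):
            """Soft-threshold wavelet denoising (universal threshold); a simple
            stand-in for the DWT front-end of the DWT-NN method."""
            coeffs = pywt.wavedec(sig, wavelet, level=level)
            # Noise scale from the finest detail band (robust MAD estimate).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(sig)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(sig)]

        fs = 360
        t = np.arange(0, 4, 1 / fs)
        toy_ecg = 1.2 * np.sin(2 * np.pi * 1.0 * t) ** 63   # spiky toy 'QRS' train
        noisy = toy_ecg + 0.2 * np.random.default_rng(1).standard_normal(t.size)
        den = dwt_denoise(noisy)
        print('noise in:', np.std(noisy - toy_ecg).round(3),
              'out:', np.std(den - toy_ecg).round(3))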

  16. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

    Full Text Available One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Even if drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered a methodology to increase sensor stability. Several studies have evidenced the augmented stability of time-variable signals elicited by the modulation of either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, the extraction of features can be accomplished in a shorter time with respect to the time necessary to calculate the usual features defined in steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal oxide semiconductor gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments were aimed at measuring the dispersion of sensor features in repeated sequences of a limited number of experimental conditions. Results evidenced that the features extracted during the temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate over time with respect to the temperature modulation.

  17. Gradient Algorithm on Stiefel Manifold and Application in Feature Extraction

    Directory of Open Access Journals (Sweden)

    Zhang Jian-jun

    2013-09-01

    Full Text Available To improve the computational efficiency of system feature extraction, reduce the occupied memory space, and simplify the program design, a modified gradient descent method on the Stiefel manifold is proposed based on the optimization framework of geometric methods on Riemannian manifolds. Different geodesic calculation formulas are used for different scenarios, and a polynomial approximation is used to closely follow the geodesic equations. The Qin Jiushao-Horner polynomial algorithm and the strategies of line search and iteration step-size adaptation are also adopted. The gradient descent algorithm on the Stiefel manifold applied to Principal Component Analysis (PCA) is discussed in detail as an example of system feature extraction. Theoretical analysis and simulation experiments show that the new method can achieve superior performance in both convergence rate and calculation efficiency while ensuring the unitary column orthogonality. In addition, it is easier to implement in software or hardware.
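
    A stripped-down sketch of the idea as applied to PCA: gradient ascent of tr(W'CW) with the gradient projected to the tangent space of the Stiefel manifold, using a QR retraction in place of the paper's geodesic and polynomial machinery; step size and iteration count are placeholders.

        import numpy as np

        def stiefel_pca(C, p, steps=500, eta=0.01, seed=0):
            """Gradient ascent on the Stiefel manifold St(n, p) maximizing
            tr(W' C W), i.e., the dominant-p PCA subspace. Uses a projected
            gradient and a QR retraction instead of geodesics."""
            rng = np.random.default_rng(seed)
            n = C.shape[0]
            W, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random start on St(n, p)
            for _ in range(steps):
                G = 2 * C @ W                                  # Euclidean gradient
                R = G - W @ (W.T @ G)                          # project out the span of W
                W, _ = np.linalg.qr(W + eta * R)               # retract back to St(n, p)
            return W

        rng = np.random.default_rng(2)
        X = rng.standard_normal((1000, 5)) * np.array([5, 3, 1, 0.5, 0.1])
        C = np.cov(X.T)
        W = stiefel_pca(C, p=2)
        # Captured variance should approach the sum of the two largest eigenvalues.
        print(np.trace(W.T @ C @ W).round(2),
              np.sort(np.linalg.eigvalsh(C))[-2:].sum().round(2))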

  18. A Review on Feature Extraction Techniques in Face Recognition

    Directory of Open Access Journals (Sweden)

    Rahimeh Rouhi

    2013-01-01

    Full Text Available Face recognition systems, due to their significant applications in security scopes, have been of great importance in recent years. An exact balance between computing cost, robustness and recognition ability is an important characteristic of such systems. Besides, designing systems that perform under different conditions (e.g. illumination, pose variation, different expressions, etc.) is a challenging problem in feature extraction for face recognition. As feature extraction is an important step in the face recognition operation, the present study reviews four feature extraction techniques used in face recognition, subsequently presents comparative results, and then discusses the advantages and disadvantages of these methods.

  19. Modification of evidence theory based on feature extraction

    Institute of Scientific and Technical Information of China (English)

    DU Feng; SHI Wen-kang; DENG Yong

    2005-01-01

    Although evidence theory has been widely used in information fusion due to its effectiveness in uncertainty reasoning, the classical DS evidence theory involves counter-intuitive behaviors when highly conflicting information exists. Many modification methods have been developed, which can be classified into two kinds of ideas: modifying the combination rules or modifying the evidence sources. In order to make the modification more reasonable and more effective, this paper first gives a thorough analysis of some typical existing modification methods, and then extracts the intrinsic features of the evidence sources by using evidence distance theory. Based on the extracted features, two modified schemes of evidence theory, following the corresponding modification ideas, are proposed. The results of numerical examples prove the good performance of the schemes when combining evidence sources with highly conflicting information.

  20. Ecg Monitoring Using Android Smart App

    Directory of Open Access Journals (Sweden)

    Pooja Pawar

    2016-04-01

    Full Text Available This paper describes a mixed-signal ECG system that is capable of implementing configurable functionality with low power consumption for portable ECG monitoring applications. A low-voltage, high-performance analog front-end extracts 3-channel ECG signals and a single-channel electrode-tissue impedance (ETI) measurement with high signal quality, providing an effective and low-cost solution for an ECG machine. ECG waveforms can be observed on Android smartphones, whose availability and affordability make this system upgradable and accessible for every class of people. The system is portable, so anyone can operate it in a simple way with an Android-based smartphone. It reduces work, effort and expense for patients and their relatives.

  1. FEATURES AND GROUND AUTOMATIC EXTRACTION FROM AIRBORNE LIDAR DATA

    OpenAIRE

    D. Costantino; M. G. Angelini

    2012-01-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the moments of third order (skewness) and fourth order (kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of weights on the measures, provided the desired results, which is a finer and less noisy classification and l...

  2. Extracting BI-RADS Features from Portuguese Clinical Texts

    OpenAIRE

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method.

  3. Eddy current pulsed phase thermography and feature extraction

    Science.gov (United States)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. A steel sample is selected as the material under test to avoid the influence of skin depth; it provides subsurface defects at different depths. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from the differential phase spectra, and preliminary linear relationships are established to measure the depths of these subsurface defects.

  4. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    Science.gov (United States)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
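
    As an illustration of the curvature-based dune-toe step described above, the sketch below locates the toe on a synthetic cross-shore transect as the point of maximum profile curvature; the logistic dune shape and dimensions are made up.

        import numpy as np

        def dune_toe_index(x, z):
            """Locate the dune toe on a cross-shore transect as the point of
            maximum curvature of the elevation profile."""
            dz = np.gradient(z, x)
            d2z = np.gradient(dz, x)
            curvature = d2z / (1 + dz ** 2) ** 1.5
            return int(np.argmax(curvature))

        x = np.linspace(0, 100, 201)                 # cross-shore distance (m)
        beach = 0.02 * x                             # gentle foreshore
        dune = 4.0 / (1 + np.exp(-(x - 70) / 3.0))   # steep foredune face
        z = beach + dune
        i = dune_toe_index(x, z)
        print(f"dune toe at x = {x[i]:.1f} m, z = {z[i]:.2f} m")   # toe near x ~ 66 m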

  5. Features and Ground Automatic Extraction from Airborne LIDAR Data

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the moments of third order (skewness) and fourth order (kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of weights on the measures, provided the desired results, which is a finer and less noisy classification. The processing has been carried out in Matlab, but to reduce processing time, given the large data density, the analysis has been limited to a moving window; subscenes were therefore produced in order to cover the entire area. The performance of the algorithm confirms its robustness and the goodness of the results. Employment of effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation in 3D information extraction using remotely sensed large datasets. After obtaining the geometric features from the LiDAR data, we want to complete the research by creating an algorithm to vectorize the features and extract the DTM.

  6. Automated feature extraction for 3-dimensional point clouds

    Science.gov (United States)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess the completeness and accuracy of the methodology.

  7. Feature Extraction and Pattern Identification for Anemometer Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Longji Sun

    2012-01-01

    Full Text Available Cup anemometers are commonly used for wind speed measurement in the wind industry. Anemometer malfunctions lead to excessive measurement errors and directly influence wind energy development for a proposed wind farm site. This paper focuses on feature extraction and pattern identification to solve the anemometer condition diagnosis problem of the PHM 2011 Data Challenge Competition. Since the accuracy of anemometers can be severely affected by environmental factors such as icing and the tubular tower itself, in order to distinguish anemometer failures from these factors, our methodology starts by eliminating irregular data (outliers) influenced by environmental factors. For paired data, the relation between the relative wind speed difference and the wind direction is extracted as an important feature to reflect normal or abnormal behaviors of paired anemometers. Decisions regarding the condition of paired anemometers are made by comparing the features extracted from training and test data. For shear data, a power law model is fitted using the preprocessed and normalized data, and the sum of the squared residuals (SSR) is used to measure the health of an array of anemometers. Decisions are made by comparing the SSRs of training and test data. The performance of our proposed methods was evaluated through the competition website. As a final result, our team ranked second place overall in both the student and professional categories of this competition.
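
    For the shear data, the power-law fit and SSR health metric can be sketched as follows; the sensor heights, the healthy shear exponent (0.14) and the injected fault are illustrative values, not competition data.

        import numpy as np

        def power_law_ssr(heights, speeds):
            """Fit the wind-shear power law v = a * z**b in log-log space and
            return (a, b, SSR); a large SSR flags a suspect anemometer array."""
            lz, lv = np.log(heights), np.log(speeds)
            b, la = np.polyfit(lz, lv, 1)            # slope, intercept
            a = np.exp(la)
            residuals = speeds - a * heights ** b
            return a, b, float(np.sum(residuals ** 2))

        z = np.array([10.0, 30.0, 50.0, 80.0])       # sensor heights (m)
        v_ok = 5.0 * (z / 10.0) ** 0.14              # healthy shear profile
        v_bad = v_ok.copy()
        v_bad[2] *= 0.7                              # one degraded anemometer
        for v in (v_ok, v_bad):
            print(power_law_ssr(z, v))               # SSR jumps for the faulty case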

  8. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes a scheme for extracting global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing the temporal information of videos, and it is more objective and consistent than other features such as color and texture. Efficient motion feature extraction is an important step for content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences; when dealing with the problem of object tracking, algorithms are usually proposed on the basis of a known object region in the frames. In this paper, a whole picture of the motion information in a video shot is obtained by analyzing the motion of the background and the foreground separately and automatically. A 6-parameter affine model is utilized as the model of background motion, and a fast and robust global motion estimation algorithm is developed to estimate its parameters. The object region is obtained by means of global motion compensation between two consecutive frames. Then the center of the object region is calculated and tracked to obtain the object motion trajectory in the video sequence. Global motion and object trajectory are described with MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that the proposed scheme is reliable and efficient.
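
    The global-motion step fits a 6-parameter affine model; the sketch below shows only the basic least-squares fit from point correspondences (the paper's estimator is a fast, robust variant, and in practice motion is estimated from pixel data rather than from given matches).

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 6-parameter affine model mapping src -> dst:
            [x', y'] = A @ [x, y] + t. Returns the 2x3 matrix [A | t]."""
            n = len(src)
            M = np.zeros((2 * n, 6))
            M[0::2, 0:2], M[0::2, 4] = src, 1.0
            M[1::2, 2:4], M[1::2, 5] = src, 1.0
            params, *_ = np.linalg.lstsq(M, np.asarray(dst).ravel(), rcond=None)
            a, b, c, d, tx, ty = params
            return np.array([[a, b, tx], [c, d, ty]])

        rng = np.random.default_rng(3)
        src = rng.uniform(0, 100, (50, 2))
        true = np.array([[0.98, -0.05, 3.0], [0.05, 0.98, -2.0]])   # slight rotation + shift
        dst = src @ true[:, :2].T + true[:, 2]
        print(np.allclose(fit_affine(src, dst), true))               # True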

  9. ECG based Atrial Fibrillation detection using Sequency Ordered Complex Hadamard Transform and Hybrid Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Padmavathi Kora

    2017-06-01

    Full Text Available The electrocardiogram (ECG) is a non-invasive diagnostic technique used for detecting cardiac arrhythmia. Over the last decade, biomedical instrumentation and research have demanded improved ability to distinguish different cardiac arrhythmias. Atrial Fibrillation (AF) is an irregular rhythm of the human heart. During AF, the atrial movements are quicker than the normal rate; as blood is not completely ejected from the atria, there is a chance of blood clot formation in the atrium. These abnormalities of the heart can be identified by changes in the morphology of the ECG. The first step in the detection of AF is preprocessing of the ECG, which removes noise using filters. Feature extraction is the next key process in this research. Recent feature extraction methods, such as Auto Regressive (AR) modeling, Magnitude Squared Coherence (MSC) and Wavelet Coherence (WTC) using a standard database (MIT-BIH), yield a large number of features. Many of these features might be insignificant, containing redundant and non-discriminatory features that introduce computational burden and loss of performance. This paper presents the fast Conjugate Symmetric Sequency-Ordered Complex Hadamard Transform (CS-SCHT) for extracting relevant features from the ECG signal. A sparse matrix factorization method is used for developing a fast and efficient CS-SCHT algorithm, and its computational performance is examined and compared to that of the HT and NCHT. The application of the CS-SCHT to ECG-based AF detection is also discussed. The fast CS-SCHT features are optimized using hybrid Firefly and Particle Swarm Optimization (FFPSO) to increase the performance of the classifier.

  10. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the races of racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned; eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups, and the extracted texture and shape features were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This work is fundamental research on race perception, essential for establishing a human-like race recognition system.

  12. A Novel Feature Extraction for Robust EMG Pattern Recognition

    CERN Document Server

    Phinyomark, Angkoon; Phukpattaranont, Pornchai

    2009-01-01

    Various types of noise are a major problem in the recognition of electromyography (EMG) signals, so methods to remove noise are significant in EMG signal analysis. White Gaussian noise (WGN) is used to represent interference in this paper. WGN is generally difficult to remove with typical filtering, and solutions for removing it are limited. Moreover, noise removal is an important step before performing feature extraction for EMG-based recognition. This research aims to present novel features that are tolerant of WGN, so that a noise removal algorithm is not needed. Two novel mean and median frequency variants (MMNF and MMDF) are presented for robust feature extraction. Sixteen existing features and the two novel ones were evaluated in a noisy environment, with WGN at various signal-to-noise ratios (SNRs), i.e. 20-0 dB, added to the original EMG signal. The results showed that MMNF performed very well, especially on weak EMG signals, compared with the others. The error of MMNF in weak EMG signal with...
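
    For reference, the conventional mean and median frequency features that MMNF and MMDF modify can be computed from a power spectrum estimate as follows (a sketch; the sampling rate is an assumed parameter, and the paper's robust variants alter the spectrum over which these statistics are taken):

        import numpy as np
        from scipy.signal import periodogram

        def mnf_mdf(emg, fs=1000.0):
            # Mean frequency: spectrum-weighted average frequency.
            # Median frequency: frequency splitting spectral power in half.
            f, p = periodogram(emg, fs=fs)
            mnf = np.sum(f * p) / np.sum(p)
            cum = np.cumsum(p)
            mdf = f[np.searchsorted(cum, cum[-1] / 2.0)]
            return mnf, mdf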

  13. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Wei-Cheng Li; Chung-Lin Huang; Pei-Yeh Chang

    2014-01-01

    In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for assessment of asymmetry. The features of the 3D point cloud of an infant's cranium are identified by local feature analysis and a two-phase k-means classification algorithm, and the 3D images of infants with asymmetric crania can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model to measure asymmetry. Numerical data on the cranial volume can be reviewed by a pediatrician to adjust the treatment plan, and the system can also be used to demonstrate treatment progress.

  14. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, local anomalistic blocks in the iris, such as crypts and furrows, are used directly as iris features. A novel image segmentation method based on an intersecting cortical model (ICM) neural network is introduced to segment these anomalistic blocks. First, the normalized iris image is fed into the ICM neural network after enhancement. Second, the iris features are segmented out and output as binary images by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network is chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming distance between two iris codes was computed to measure their dissimilarity. Experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential for iris recognition.
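
    The dissimilarity measure used on the iris codes is the fractional Hamming distance; a minimal sketch for two equal-length binary codes:

        import numpy as np

        def hamming_distance(code_a, code_b):
            # Fraction of disagreeing bits: 0 for identical codes,
            # about 0.5 for statistically independent ones.
            a = np.asarray(code_a, dtype=bool)
            b = np.asarray(code_b, dtype=bool)
            return np.count_nonzero(a ^ b) / a.size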

  15. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2014-06-01

    Full Text Available User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of the model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing feature selection allows us to reduce the number of signal features in the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of the estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.

  16. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    Science.gov (United States)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide, and impending flooding is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project uses remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping application, Google Earth Pro (GEP), is used to load these satellite images as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attributing building features in GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging in GEP depends on the satellite imagery or on the half-meter-resolution orthophotographs obtained during LiDAR acquisition, rather than on GPS units with three-meter accuracy. The attributed building features are overlaid on the Phil-LiDAR 1 flood hazard map to determine the exposed population. Building features obtained from satellite imagery can be used not only in flood exposure assessment but also in assessing other hazards, and several other features may likewise be extracted from the imagery.

  17. Smartphone home monitoring of ECG

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Moon, Gyu; Landa, Joseph; Nakajima, Hiroshi; Hata, Yutaka

    2012-06-01

    Ambulatory Holter electrocardiography (ECG) monitoring systems that record heartbeat data and transmit it over the Internet are already commercially available. However, they enjoy only qualified confidence and thus limited market penetration. Our system targets aging global villagers with growing biomedical wellness (BMW) homecare needs, rather than hospital-related biomedical illness (BMI). It was designed within SWaP-C (Size, Weight, and Power, Cost) constraints using three innovative modules: (i) a Smart Electrode (low-power mixed-signal design embedding modern compressive sensing and nanotechnology to improve the electrodes' contact impedance); (ii) a Learnable Database (adaptive wavelet-transform QRST feature extraction and a Sequential Query Relational database allowing home-care monitoring with retrievable Aided Target Recognition); and (iii) a Smartphone (touch-screen interface, powerful computation capability, caretaker reporting with GPS and ID, and a patient panic button for a programmable emergency procedure). It can provide a supplementary home screening system for post- or pre-diagnosis care at home, with a built-in database searchable by the time, place, and degree of urgency of the events recorded.

  18. Real-time CHF detection from ECG signals using a novel discretization method.

    Science.gov (United States)

    Orhan, Umut

    2013-10-01

    This study proposes a new method, equal frequency in amplitude and equal width in time (EFiA-EWiT) discretization, to discriminate between congestive heart failure (CHF) and normal sinus rhythm (NSR) patterns in ECG signals. The ECG unit-pattern concept is introduced to represent a standard RR interval, and the method extracts features from the unit patterns that are then classified by a primitive classifier. The proposed method was tested in two classification experiments using ECG records from Physiobank databases, and the results were compared with those of several previous studies. In the first experiment, an off-line classification was performed with unit patterns selected from long ECG segments. The method was also used to detect CHF by real-time ECG waveform analysis. Besides demonstrating the success of the proposed method, the results showed that some unit patterns in a long ECG segment from a heart patient were more suggestive of disease than others. These results indicate that the proposed approach merits additional research.

  19. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To address the difficulty of precisely extracting target outlines when variations of target scattering characteristics are neglected in the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important factors that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristics. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation-angle conditions is put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  20. Support vector machines for automated recognition of obstructive sleep apnea syndrome from ECG recordings.

    Science.gov (United States)

    Khandoker, Ahsan H; Palaniswami, Marimuthu; Karmakar, Chandan K

    2009-01-01

    Obstructive sleep apnea syndrome (OSAS) is associated with cardiovascular morbidity as well as excessive daytime sleepiness and poor quality of life. In this study, we apply a machine learning technique [support vector machines (SVMs)] for automated recognition of OSAS types from nocturnal ECG recordings. A total of 125 sets of nocturnal ECG recordings, each approximately 8 h in duration, acquired from normal subjects (OSAS-) and subjects with OSAS (OSAS+) were analyzed. Features extracted from successive wavelet coefficient levels, after wavelet decomposition of the heart rate variability (HRV) signal from RR intervals and the ECG-derived respiration (EDR) signal from the R waves of QRS amplitudes, were used as inputs to the SVMs to recognize OSAS+/- subjects. Using the leave-one-out technique, the maximum classification accuracy over the 83 training sets was 100% for SVMs using a selected subset of combined HRV and EDR features. Independent tests on 42 subjects correctly recognized 24 of 26 OSAS+ subjects and 15 of 16 OSAS- subjects (accuracy = 92.85%; Cohen's kappa = 0.85). For estimating the relative severity of OSAS, the posterior probabilities of the SVM outputs were calculated and compared with the respective apnea/hypopnea indices. These results suggest the superior performance of SVMs in OSAS recognition supported by wavelet-based ECG features, and demonstrate considerable potential for applying SVMs in an ECG-based screening device that can aid a sleep specialist in the initial assessment of patients with suspected OSAS.
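
    The classification protocol can be sketched with scikit-learn as follows; the feature matrix here is a random placeholder standing in for the wavelet-based HRV/EDR features, and the kernel choice is an assumption:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(83, 12))     # placeholder: one feature row per subject
        y = rng.integers(0, 2, size=83)   # 1 = OSAS+, 0 = OSAS-

        clf = SVC(kernel="rbf", probability=True)  # posteriors for severity grading
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.3f}")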

  1. Entropy Analysis as an Electroencephalogram Feature Extraction Method

    Directory of Open Access Journals (Sweden)

    P. I. Sotnikov

    2014-01-01

    Full Text Available The aim of this study was to evaluate the possibility of using entropy analysis as an electroencephalogram (EEG) feature extraction method for brain-computer interfaces (BCI). The first section of the article describes the proposed algorithm, which calculates characteristic features using Shannon entropy analysis. The second section discusses the development of a classifier for the EEG records; we use a support vector machine (SVM) as the classifier. The third section describes the test data. We then estimate the efficiency of the considered feature extraction method and compare it with a number of other methods: evaluation of signal variance; estimation of power spectral density (PSD); estimation of autoregressive model parameters; signal analysis using the continuous wavelet transform; and construction of a common spatial pattern (CSP) filter. As a measure of efficiency we use the probability of correctly recognized types of imagined movements. At the last stage we evaluate the impact of EEG signal preprocessing methods on the final classification accuracy. We conclude that entropy analysis has good prospects in BCI applications.
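
    The core characteristic feature is the Shannon entropy of each EEG window; one common histogram-based estimate (the bin count is an assumption):

        import numpy as np

        def shannon_entropy(window, bins=32):
            # H = -sum(p * log2 p) over the amplitude distribution.
            hist, _ = np.histogram(window, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]  # empty bins contribute nothing
            return -np.sum(p * np.log2(p))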

  2. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique for identifying individuals by their traits and characteristics, and iris recognition is one of the most reliable biometric methods. Because the iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, unlike fingerprints, which can be altered by accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, few commercial products are available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. At present, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and propose a box-counting fractal dimension and an iris code as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.
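
    The box-counting fractal dimension proposed as a feature can be estimated as the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s covering the pattern; a sketch for a binary texture image:

        import numpy as np

        def box_counting_dimension(binary_img):
            img = np.asarray(binary_img, dtype=bool)
            sizes = 2 ** np.arange(1, int(np.log2(min(img.shape))))
            counts = []
            for s in sizes:
                # Crop to a multiple of s, then count boxes holding foreground.
                sub = img[:img.shape[0] // s * s, :img.shape[1] // s * s]
                blocks = sub.reshape(sub.shape[0] // s, s, sub.shape[1] // s, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
            return slope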

  3. Data Clustering Analysis Based on Wavelet Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    QIANYuntao; TANGYuanyan

    2003-01-01

    A novel wavelet-based data clustering method is presented in this paper, which includes wavelet feature extraction and a cluster-growing algorithm. The wavelet transform can provide rich and diversified information for representing the global and local inherent structures of a dataset; therefore, it is a very powerful tool for clustering feature extraction. As clustering is an unsupervised classification, the target of clustering analysis depends on the specific clustering criteria, and several criteria that should be considered for a general-purpose clustering algorithm are proposed. A cluster-growing algorithm is then constructed to connect the clustering criteria with the wavelet features. Compared with other popular clustering methods, our approach provides multi-resolution clustering results, needs few prior parameters, correctly deals with irregularly shaped clusters, and is insensitive to noise and outliers. As this wavelet-based clustering method is aimed at two-dimensional data clustering problems, high-dimensional datasets are first transformed into two-dimensional Euclidean space using the self-organizing map and U-matrix method, so that high-dimensional data can also be analyzed. Results on simulated data and standard test data are reported to illustrate the power of our method.

  4. Nonlinear filtering in ECG Signal Enhancement

    Directory of Open Access Journals (Sweden)

    N. Siddiah

    2012-02-01

    Full Text Available High-resolution ECG signals are needed for the analysis of cardiac abnormalities. Baseline wander is one of the important artifacts introduced during ECG signal acquisition and strongly affects signal quality, so these artifacts must be removed to facilitate proper diagnosis. In this paper, various nonlinear, non-adaptive filtering techniques are presented for the removal of baseline wander from ECG signals. The performance of the various filtering techniques is measured in terms of signal-to-noise ratio.
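
    One widely used nonlinear, non-adaptive scheme of the kind surveyed here is two-stage median filtering: a window of roughly 200 ms suppresses QRS complexes and one of roughly 600 ms suppresses T waves, leaving a baseline estimate to subtract (window lengths and sampling rate below are assumptions, not the paper's settings):

        import numpy as np
        from scipy.signal import medfilt

        def remove_baseline(ecg, fs=360):
            ecg = np.asarray(ecg, dtype=float)
            w1 = int(0.2 * fs) | 1  # force odd kernel lengths
            w2 = int(0.6 * fs) | 1
            baseline = medfilt(medfilt(ecg, w1), w2)
            return ecg - baseline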

  5. A Novel Feature Cloud Visualization for Depiction of Product Features Extracted from Customer Reviews

    Directory of Open Access Journals (Sweden)

    Tanvir Ahmad

    2013-09-01

    Full Text Available There has been exponential growth of web content on the World Wide Web, with online users contributing the majority of the unstructured data, which also contains a good amount of information on many different subjects ranging from products and news to programmes and services. Other users read these reviews and try to grasp the meaning of the sentences expressed by the reviewers. Since the number and length of the reviews are so large, most of the time a user will read only a few reviews and would still like to make an informed decision about the subject under discussion. Websites have adopted many different methods for this, such as numerical rating, star rating, and percentage rating; however, these methods fail to give information about the explicit features of the product and their overall weight when considering the product in totality. In this paper, a framework is presented that first calculates the weight of each feature depending on the satisfaction or dissatisfaction users express about it, and then a feature cloud visualization is proposed that uses two levels of specificity: the first level lists the extracted features and the second level shows the opinions on those features. A font generation function is applied that calculates the font size according to the importance of each feature vis-a-vis the opinions expressed about it.

  6. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra is proposed in this study. At the same time, an improved differential evolution (DE) feature selection method is proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.

  7. Features extraction from the electrocatalytic gas sensor responses

    Science.gov (United States)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One type of gas sensor used for the detection and identification of toxic air pollutants is the electro-catalytic gas sensor. Electro-catalytic sensors, which work in cyclic voltammetry mode, enable the detection of various gases. Their responses take the form of I-V curves, which contain information about the type and concentration of the measured volatile compound; however, additional analysis is required for efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for this application, but further work on improving the processing of the sensor's responses is required. In this article, a method for extracting parameters from electro-catalytic sensor responses is presented. The extracted features enable a significant reduction of the data dimension without loss of recognition efficiency for four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide and sulfur dioxide.

  8. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    Full Text Available The development of the internet and technology has had a major impact, creating a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also provide reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own and their competitors' products. The large number of opinions calls for a method that lets the reader grasp the gist of the whole; this idea emerged from review summarization, which summarizes overall opinion based on the sentiments and features contained in the reviews. In this study, the main focus domain is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, one of the tools of Natural Language Processing (NLP), together with a knowledge-based dictionary useful for handling implicit features. The end result is a summary that aggregates consumer reviews by feature and sentiment. With the proposed method, sentiment classification accuracy reaches 81.2% on positive test data and 80.2% on negative test data, and feature extraction accuracy reaches 90.3%.

  9. Extract relevant features from DEM for groundwater potential mapping

    Science.gov (United States)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been applied widely in groundwater potential mapping research, but in data-scarce areas it encounters many problems due to limited data. The Digital Elevation Model (DEM) is a digital representation of the topography with applications in many fields, and previous research has shown that much information relevant to groundwater potential mapping (such as geological, terrain and hydrological features) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and readily accessible data sources in GIS, was used to extract information for groundwater potential mapping in the Battle River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies: lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI). Methods for extracting the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative-effects matrix was used for weight assignment, and a multi-criteria evaluation was carried out in ArcGIS software to delineate the groundwater potential map. The final map was divided into five categories: non-potential, poor, moderate, good, and excellent zones. Eventually, the success rate curve was drawn and the area under the curve (AUC) was computed for validation. The validation showed a success rate of 79%, confirming the method's feasibility. The method affords a new way of researching groundwater management in areas suffering from data scarcity, and also broadens the application area of DEM data.

  10. Feature Extraction from Subband Brain Signals and Its Classification

    Science.gov (United States)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers non-stationarity as well as independence/uncorrelatedness criteria, along with the asymmetry ratio, of electroencephalogram (EEG) signals, and proposes a hybrid signal preprocessing approach applied before feature extraction. A filter bank based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals: it decomposes the raw EEG into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the whitening matrix and separating matrix do not satisfy the orthogonality and orthonormality criteria, respectively. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between these two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier, so we modify the estimated separating matrix by this multiplier to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is then subjected to the updated separating matrix to extract subband sub-components from each class, which are passed to feature extraction (power spectral density) followed by linear discriminant analysis (LDA).

  11. Feature Extraction and Analysis of Breast Cancer Specimen

    Science.gov (United States)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps; normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. Features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also discuss breast cancer recognition through image processing and its prevention by controlling p53 gene mutation to some extent.

  12. Point features extraction: towards slam for an autonomous underwater vehicle

    CSIR Research Space (South Africa)

    Matsebe, O

    2010-07-01

    Full Text Available Presented at the 25th International Conference of CAD/CAM, Robotics & Factories of the Future, 13-16 July 2010, Pretoria, South Africa. ... The vehicle is equipped with a Mechanically Scanned Imaging Sonar (Micron DST Sonar) which is able...

  13. Ensemble Feature Extraction Modules for Improved Hindi Speech Recognition System

    Directory of Open Access Journals (Sweden)

    Malay Kumar

    2012-05-01

    Full Text Available Speech is the most natural way of communication between human beings. The field of speech recognition, with its intriguing promise of man-machine conversation and its versatile applications, has led to the design of automatic speech recognition (ASR) systems. In this paper we present a novel approach to Hindi speech recognition that ensembles the feature extraction modules of several ASR systems and combines their outputs using the ROVER voting technique. Experimental results show that the proposed system produces better results than traditional ASR systems.

  14. Efficient ECG signal analysis using wavelet technique for arrhythmia detection: an ANFIS approach

    Science.gov (United States)

    Khandait, P. D.; Bawane, N. G.; Limaye, S. S.

    2010-02-01

    This paper deals with improved ECG signal analysis using wavelet transform techniques and modified feature extraction for arrhythmia detection based on a neuro-fuzzy technique. The improvement rests on a suitable choice of features for evaluating and predicting life-threatening ventricular arrhythmia. Analyzing electrocardiographic (ECG) signals involves not only inspection of the P, QRS and T waves, but also the causal relations between them and the temporal sequences they build within long observation periods. The wavelet transform is used for effective feature extraction, and an Adaptive Neuro-Fuzzy Inference System (ANFIS) is used as the classifier model. In a first step, QRS complexes are detected; then each QRS is delineated by detecting and identifying the peaks of the individual waves, as well as the complex onset and end; finally, the P and T wave peaks, onsets and ends are determined. We evaluated the algorithm on several manually annotated databases developed for validation purposes, such as the MIT-BIH Arrhythmia and CSE databases. Features based on the ECG waveform shape and heartbeat intervals are used as inputs to the classifier. The performance of the ANFIS model is evaluated in terms of training performance and classification accuracy, and the results confirm that the proposed ANFIS model has potential for classifying ECG signals. Cross-validation is used to measure the classifier performance. A testing classification accuracy of 95.13% is achieved, which is a significant improvement.
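
    The first step, QRS detection, can be sketched with a Pan-Tompkins-style energy detector; this is a simplified stand-in for the wavelet delineation used in the paper, and the fixed threshold rule is an assumption:

        import numpy as np

        def detect_qrs(ecg, fs=360):
            # Derivative -> squaring -> moving-window integration.
            win = int(0.15 * fs)
            integ = np.convolve(np.diff(ecg) ** 2, np.ones(win) / win, mode="same")
            thr = 0.5 * integ.max()
            refractory = int(0.2 * fs)  # at most one beat per 200 ms
            peaks, last = [], -refractory
            for i in range(1, len(integ) - 1):
                if (integ[i] > thr and integ[i - 1] <= integ[i] > integ[i + 1]
                        and i - last > refractory):
                    peaks.append(i)
                    last = i
            return np.asarray(peaks)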

  15. New learning subspace method for image feature extraction

    Institute of Scientific and Technical Information of China (English)

    CAO Jian-hai; LI Long; LU Chang-hou

    2006-01-01

    A new method, the Windows Minimum/Maximum Module Learning Subspace Algorithm (WMMLSA), for image feature extraction is presented. The WMMLSA is insensitive to the order of the training samples and can effectively regulate the radical vectors of an image feature subspace by selecting the study samples for the subspace iterative learning algorithm, so it can improve the robustness and generalization capacity of a pattern subspace and enhance the recognition rate of a classifier. At the same time, a pattern subspace is built by the PCA method. A classifier based on the WMMLSA is successfully applied to recognize pressed characters in gray-scale images. The results indicate that the correct recognition rate with the WMMLSA is higher than that with the Average Learning Subspace Method, and that both the training speed and the classification speed are improved. The new method is more applicable and efficient.

  16. Reaction Decoder Tool (RDT): extracting features from chemical reactions

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W.; Holliday, Gemma L.; Steinbeck, Christoph; Thornton, Janet M.

    2016-01-01

    Summary: Extracting chemical features like Atom–Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. Availability and implementation: This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. PMID: 27153692

  17. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  18. Graph-driven features extraction from microarray data

    CERN Document Server

    Vert, J P; Vert, Jean-Philippe; Kanehisa, Minoru

    2002-01-01

    Gene function prediction from microarray data is a first step toward better understanding the machinery of the cell from relatively cheap and easy-to-produce data. In this paper we investigate whether the knowledge of many metabolic pathways and their catalyzing enzymes accumulated over the years can help improve the performance of classifiers for this problem. The complex network of known biochemical reactions in the cell results in a representation where genes are nodes of a graph. Formulating the problem as a graph-driven feature extraction problem, based on the simple idea that relevant features are likely to exhibit correlation with respect to the topology of the graph, we end up with an algorithm which involves encoding the network and the set of expression profiles into kernel functions, and performing a regularized form of canonical correlation analysis in the corresponding reproducing kernel Hilbert spaces. Function prediction experiments for the genes of the yeast S. Cerevisiae validate this appro...

  19. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For pre-processing, a two-step iris localization approach is proposed; for pathological feature extraction, a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed; and finally, support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance for both hospital and public use.

  20. Road marking features extraction using the VIAPIX® system

    Science.gov (United States)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm examines these images automatically and rapidly, retrieving information on the road markings, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  1. Extraction of sandy bedforms features through geodesic morphometry

    Science.gov (United States)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from completely understood. These bedforms are a serious threat to navigation security and to anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools for extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of the bedforms, and such 1D and 2D approaches cannot address the wide range of types and complexities of bedforms. In contrast, this work follows a 3D global semi-automatic approach based on a bathymetric TIN. The primitives currently extracted are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which heavily overprint the observations; to this end, an anisotropic filter that discards these structures while enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  2. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

    Full Text Available The availability of large volumes of remote sensing data calls for a higher degree of automation in feature extraction, making it a need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation or bodies of water by using a combination of spectral and elevation characteristics. Utilizing these features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice; the huge quantum of data that needs to be processed also demands accelerated processing. GPUs, which were originally designed to provide efficient visualization, are now massively employed for computation-intensive parallel processing. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques: two Laplacian of Gaussian (LoG) masks are applied to the image individually, zero-crossing points are detected, and pixels are extracted based on their standard deviation with respect to the surrounding pixels. The two images extracted with the different LoG masks are combined, resulting in an image containing the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content of the extracted image; the image is passed through a hybrid median filter to remove salt-and-pepper noise. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, and system-level challenges, and quantifies the benefits of integrating GPUs in such an environment.
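
    The central LoG filtering and zero-crossing step of this pipeline can be sketched as follows (single mask only; the standard-deviation test and the combination of two masks are omitted):

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_edges(image, sigma=2.0):
            log = gaussian_laplace(np.asarray(image, dtype=float), sigma=sigma)
            zc = np.zeros(log.shape, dtype=bool)
            # Zero crossing: sign change against the right or lower neighbour.
            zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
            zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
            return zc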

  3. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and the many format properties a malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout...

  5. Design and implementation of a multiband digital filter using FPGA to extract the ECG signal in the presence of different interference signals.

    Science.gov (United States)

    Aboutabikh, Kamal; Aboukerdah, Nader

    2015-07-01

    In this paper, we propose a practical way to synthesize and filter an ECG signal in the presence of four types of interference: (1) power network interference with a fundamental frequency of 50 Hz; (2) respiration interference in the frequency range 0.05 to 0.5 Hz; (3) muscle signals with a frequency around 25 Hz; and (4) white noise within the ECG signal band. This was done by implementing a multiband digital filter (seven bands) of FIR multiband least-squares type on a digital programmable device (Cyclone II EP2C70F896C6 FPGA, Altera) placed on an education and development board (DE2-70, Terasic). The filter was designed in VHDL within the Quartus II 9.1 design environment. The proposed method relies on Direct Digital Frequency Synthesizers (DDFS) designed to synthesize the ECG signal and the various interference signals. So that the filtered synthetic ECG would be closer to an actual ECG signal, we designed a single multiband digital filter instead of using three separate digital filters (LPF, HPF, BSF); thus all interference signals were removed with a single digital filter. The filter's behavior was studied using a digital oscilloscope to characterize the input and output signals in the presence of different sinusoidal interference signals and white noise.
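
    A least-squares multiband FIR design of this kind can be prototyped with scipy before being committed to VHDL; the band plan below (drift, 25 Hz muscle and 50 Hz mains stopbands) and the sampling rate are illustrative assumptions, not the paper's exact seven-band specification:

        import numpy as np
        from scipy.signal import firls

        fs = 500.0                 # assumed sampling rate (Hz)
        nyq = fs / 2.0
        edges = [0, 0.3, 0.7, 23, 24, 26, 27, 48, 49, 51, 52, nyq]  # Hz, in pairs
        gains = [0, 0,   1,   1,  0,  0,  1,  1,  0,  0,  1,  1]
        taps = firls(2001, np.asarray(edges) / nyq, gains)  # linear-phase FIR

        # Apply to a synthesized ECG trace:
        # filtered = np.convolve(ecg, taps, mode="same")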

  6. Real-time hypothesis driven feature extraction on parallel processing architectures

    DEFF Research Database (Denmark)

    Granmo, O.-C.; Jensen, Finn Verner

    2002-01-01

    Feature extraction in content-based indexing of media streams is often computationally intensive. Typically, a parallel processing architecture is necessary for real-time performance when extracting features by brute force. On the other hand, Bayesian-network-based systems for hypothesis-driven feature..., rather than one-by-one. Thereby, the advantages of parallel feature extraction can be combined with the advantages of hypothesis-driven feature extraction. The technique is based on a sequential backward feature-set search and a correlation-based feature-set evaluation function. In order to reduce...

  7. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used to extract human facial features for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is performed on the face images; the image structure is then represented by a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  8. Analyzing edge detection techniques for feature extraction in dental radiographs

    Directory of Open Access Journals (Sweden)

    Kanika Lakhani

    2016-09-01

    Full Text Available Several dental problems can be detected using radiographs, but the main issue with radiographs is that the features of interest are not very prominent. In this paper, two well-known edge detection techniques are applied to a set of 20 radiographs, and the number of pixels in each image is calculated. Further, a Gaussian filter is applied to smooth the images so as to highlight defects in the tooth. If image data are available in pixel form for both healthy and decayed teeth, the images can easily be compared using edge detection techniques, making diagnosis much easier. A Laplacian edge detection technique is then applied to sharpen the edges of the given image. The aim is to detect discontinuities in dental radiographs relative to the original healthy tooth. Future work includes feature extraction on the images for the classification of dental problems.

  9. Research on Feature Extraction of Remnant Particles of Aerospace Relays

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The existence of remnant particles, which significantly reduce the reliability of relays, is a serious problem for aerospace relays. The traditional method for detecting remnant particles, particle impact noise detection (PIND), can merely detect the existence of particles; it cannot provide any information about their material, yet such information is very helpful for analyzing the causes of remnants. By analyzing the output acoustic signals of a PIND tester, this paper proposes three feature extraction methods: unit-energy average pulse duration time, the shape parameter of the signal power spectral density (PSD), and the pulse linear predictive coding coefficient sequence. These methods allow identified remnants to be classified into four categories by material. Furthermore, we prove the validity of the new method by processing PIND signals from actual tests.

  10. Transmission line icing prediction based on DWT feature extraction

    Science.gov (United States)

    Ma, T. N.; Niu, D. X.; Huang, Y. L.

    2016-08-01

    Transmission line icing prediction is a premise of ensuring the safe operation of the network, as well as a very important basis for the prevention of freezing disasters. In order to improve the accuracy of icing prediction, a transmission line icing prediction model based on discrete wavelet transform (DWT) feature extraction was built. In this method, a group of high- and low-frequency signals is obtained by DWT decomposition and then fitted and predicted using a partial least squares regression model (PLS) and a wavelet least squares support vector model (w-LSSVM). The final icing prediction is obtained by adding the predicted values of the high- and low-frequency signals. The results show that the method is effective and feasible for the prediction of transmission line icing.

  11. New feature extraction in gene expression data for tumor classification

    Institute of Scientific and Technical Information of China (English)

    HE Renya; CHENG Qiansheng; WU Lianwen; YUAN Kehong

    2005-01-01

    Using gene expression data to discriminate tumor samples from normal ones is a powerful method, but it is sometimes difficult because gene expression data are high-dimensional while the number of samples in the data sets is very small. The key technique is to find a new gene expression profile that provides understanding of, and insight into, tumor-related cellular processes. In this paper, we propose a new feature extraction method based on the variance relative to the class center and employ a support vector machine to classify the gene data as normal or tumor. Two tumor data sets are used to demonstrate the effectiveness of our methods, and the results show that performance is significantly improved.

  12. Online feature extraction for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, Elmaddin; Tambave, Ganesh; Kavatsyuk, Myroslav; Loehner, Herbert [KVI, University of Groningen (Netherlands); Collaboration: PANDA-Collaboration

    2011-07-01

    Resonances in the charmonium mass region will be studied in antiproton annihilations at FAIR with the multi-purpose PANDA spectrometer, providing measurements of electromagnetic signals over a wide dynamic range. The Sampling ADC (SADC) readout of the Electromagnetic Calorimeter (EMC) will allow online hit detection at the single-channel level and will derive time and energy information. A digital filtering and feature-extraction algorithm was developed and implemented in VHDL for online application in a commercial SADC. We discuss the readout scheme, the program logic, precise signal amplitude detection with phase correction at low sampling frequencies, and the use of a double moving-window deconvolution filter for pulse-shape restoration. Such double filtering allows the EMC to operate at much higher rates and minimizes the number of pile-up events.
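
    Moving-window deconvolution of an exponentially decaying preamplifier pulse can be sketched as an inverse decay filter followed by a moving sum; applying it twice yields the double filtering mentioned above (the decay constant and window length, both in samples, are assumed parameters):

        import numpy as np

        def mwd(x, tau, m):
            x = np.asarray(x, dtype=float)
            a = np.exp(-1.0 / tau)     # pole of the exponential decay
            d = x.copy()
            d[1:] -= a * x[:-1]        # deconvolution: decay pulse -> impulse
            return np.convolve(d, np.ones(m), mode="full")[:len(x)]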

  13. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Full Text Available The electric power system is one of the most complex artificial systems in the world, its complexity determined by its constitution, configuration, operation and organization. Faults in an electric power system cannot be completely avoided. When the system passes from normal operation to failure or an abnormal state, its electric quantities (currents, voltages, phase angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in a principal component usually corresponds to the fault. Therefore, utilizing real-time measurements from phasor measurement units and principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in electric power systems, enormous problems remain that need close and intensive study.
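
    A minimal numpy sketch of this idea locates the variable with the largest absolute loading in the first principal component of a window of synchronized PMU measurements:

        import numpy as np

        def fault_variable(measurements):
            # measurements: (samples x variables) matrix of PMU quantities.
            X = np.asarray(measurements, dtype=float)
            X = X - X.mean(axis=0)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return int(np.argmax(np.abs(vt[0])))  # loadings of the first PC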

  14. FEATURE EXTRACTION OF BONES AND SKIN BASED ON ULTRASONIC SCANNING

    Institute of Scientific and Technical Information of China (English)

    Zheng Shuxian; Zhao Wanhua; Lu Bingheng; Zhao Zhao

    2005-01-01

    In prosthetic socket design, CT scanning is the routine technique for obtaining cross-sectional images of the residual limb, but it is costly and exposes the patient to radiation. To address this, a new ultrasonic scanning method is developed to acquire the bone and skin contours of the residual limb. Using a pig foreleg as the scanning object, an overlapping algorithm is designed to reconstruct the 2D cross-sectional image; the contours of bone and skin are extracted using an edge detection algorithm, and the 3D model of the pig foreleg is reconstructed using reverse engineering technology. Checking the accuracy of the image by scanning a cylindrical workpiece shows that the extracted contours of the cylinder are quite close to the standard circumference, so it is feasible to obtain the contours of bones and skin by ultrasonic scanning. Featuring no radiation and low cost, the ultrasonic scanning system is a new means of cross-sectional scanning for medical imaging.

  15. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

    This paper proposes a novel adaptive algorithm to automatically extract facial feature points, such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners, in frontal-view faces; it is based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to locate the face and crop the face region in an image. Based on the structure of the human face, six relevant regions, namely the right eyebrow, left eyebrow, right eye, left eye, nose, and mouth areas, are cropped from the face image. The histogram of each cropped region is then computed, and its cumulative histogram is employed with varying threshold values to create a new filtered image in an adaptive way. The connected component of the area of interest in each filtered image indicates the respective feature region. A simple linear search algorithm is applied to the eyebrow, eye and mouth filtered images, and a contour algorithm for the nos...

  16. Texture features analysis for coastline extraction in remotely sensed images

    Science.gov (United States)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    Accurate knowledge of the shoreline position is of fundamental importance in several applications, such as cartography and ship positioning. Moreover, the coastline can be seen as a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which all data (those representing the sea) are not significant. The processing for the digital elevation model can then be refined by considering only the in-land data.

  17. Pomegranate peel and peel extracts: chemistry and food features.

    Science.gov (United States)

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  18. A Novel Mobile ECG Telemonitoring System.

    Science.gov (United States)

    Wu, Baoming; Zhuo, Yu; Zhu, Xinjian; Yan, Qingguang; Zhu, Lingyun; Li, Gang

    2005-01-01

    This paper introduces a novel mobile ECG telemonitoring system. By means of the CDMA1x (GPSOne) mobile telecommunication network, the system can perform "full time and space" monitoring of the human ECG signal, and once the signal of the monitored subject departs from its normal range, the hospital ECG monitoring center can localize his/her geographical position and provide rescue immediately. Another feature of the system is its high anti-interference capability. In order to reduce the 50 Hz and RF interference encountered during mobile monitoring, which is usually much more serious than in conventional hospital monitoring, a new active recording technology was proposed and an active ECG recording electrode was designed. The system has passed clinical tests and is used in China.

  19. Electrocardiogram (ECG) pattern modeling and recognition via deterministic learning

    Institute of Scientific and Technical Information of China (English)

    Xunde DONG; Cong WANG; Junmin HU; Shanxing OU

    2014-01-01

    A method for electrocardiogram (ECG) pattern modeling and recognition via deterministic learning theory is presented in this paper. Instead of recognizing ECG signals beat by beat, each ECG signal, which contains a number of heartbeats, is recognized as a whole. The method is based entirely on the temporal features (i.e., the dynamics) of ECG patterns, which contain complete information about the patterns. A dynamical model capable of generating synthetic ECG signals is employed to demonstrate the method, which consists of two phases: the identification (training) phase and the recognition (test) phase. In the identification phase, the dynamics of ECG patterns is accurately modeled and expressed as constant RBF neural weights through deterministic learning. In the recognition phase, the modeling results are used for ECG pattern recognition. The main feature of the proposed method is that the dynamics of ECG patterns is accurately modeled and then used for recognition. Experimental studies using the Physikalisch-Technische Bundesanstalt (PTB) database demonstrate the effectiveness of the approach.

  20. Wearable technology and ECG processing for fall risk assessment, prevention and detection.

    Science.gov (United States)

    Melillo, Paolo; Castaldo, Rossana; Sannino, Giovanna; Orrico, Ada; de Pietro, Giuseppe; Pecchia, Leandro

    2015-01-01

    Falls represent one of the most common causes of injury-related morbidity and mortality in later life. Subjects with cardiovascular disorders (e.g., related to autonomic dysfunction and postural hypotension) are at higher risk of falling. Autonomic dysfunctions increasing the risk of falling in the short and mid-term can be assessed through Heart Rate Variability (HRV) extracted from the electrocardiogram (ECG). We developed three trials to assess the usefulness of ECG monitoring with wearable devices for: risk assessment of falling in the next few weeks; prevention of imminent falls due to standing hypotension; and fall detection. Statistical and data-mining methods were adopted to develop classification and regression models, validated with a cross-validation approach. The first classifier, based on HRV features, identified future fallers among hypertensive patients with an accuracy of 72% (sensitivity: 51.1%, specificity: 80.2%). The regression model to predict falls due to orthostatic hypotension from HRV recorded before standing achieved an overall accuracy of 80% (sensitivity: 92%, specificity: 90%). Finally, the classifier to detect simulated falls using ECG achieved an accuracy of 77.3% (sensitivity: 81.8%, specificity: 72.7%). The evidence from these three studies showed that ECG monitoring and processing can achieve satisfactory performance compared with other systems for fall risk assessment, prevention and detection. This is interesting because, unlike other technologies currently employed to prevent falls, ECG is recommended for many other pathologies of later life and is better accepted by senior citizens.
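
    For concreteness, here is a minimal sketch of two standard time-domain HRV features computed from RR intervals; the trials above almost certainly used a richer feature set, and the RR values below are toy numbers.

```python
# SDNN and RMSSD from RR intervals (seconds); toy values, not trial data.
import numpy as np

rr = np.array([0.81, 0.79, 0.84, 0.80, 0.78, 0.82, 0.85, 0.79])

sdnn = rr.std(ddof=1)                        # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat (short-term) variability
print(f"SDNN = {sdnn * 1000:.1f} ms, RMSSD = {rmssd * 1000:.1f} ms")
```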

  1. Electrocardiogram Based Identification using a New Effective Intelligent Selection of Fused Features

    Science.gov (United States)

    Abbaspour, Hamidreza; Razavi, Seyyed Mohammad; Mehrshad, Nasser

    2015-01-01

    Over the years, the feasibility of using the electrocardiogram (ECG) signal for human identification has been investigated and some methods have been suggested. In this research, a new, effective intelligent feature selection method for ECG signals is proposed. The method is developed so that it can select the important features necessary for identification through analysis of the ECG signals. For this purpose, after ECG signal preprocessing, characterizing features were extracted and then compressed using the cosine transform. The features most effective for identification were selected from among the characterizing features using a combination of a genetic algorithm and artificial neural networks. The proposed method was tested on three public ECG databases, namely the MIT-BIH Arrhythmia Database, the MIT-BIH Normal Sinus Rhythm Database and the European ST-T Database, in order to evaluate the proposed subject identification method on normal ECG signals as well as ECG signals with arrhythmias. Identification rates of 99.89%, 99.84% and 99.99% were obtained for these databases, respectively. The proposed algorithm exhibits remarkable identification accuracy not only on normal ECG signals, but also in the presence of various arrhythmias. Simulation results showed that the proposed method, despite the low number of selected features, performs well in the identification task. PMID:25709939
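
    The cosine-transform compression step can be illustrated as follows; the toy beat and the number of retained coefficients are assumptions, and the GA/ANN selection stage is not reproduced.

```python
# Compressing a beat with the DCT: most energy sits in the leading
# coefficients, so a short prefix is a compact feature vector.
import numpy as np
from scipy.fft import dct, idct

beat = np.sin(np.linspace(0, np.pi, 128)) ** 3     # toy heartbeat segment
c = dct(beat, norm="ortho")

k = 16                                             # keep 16 of 128 coefficients
c_small = np.zeros_like(c)
c_small[:k] = c[:k]
recon = idct(c_small, norm="ortho")
print("relative reconstruction error:",
      np.linalg.norm(beat - recon) / np.linalg.norm(beat))
```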

  2. Feature extraction and models for speech: An overview

    Science.gov (United States)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then, human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression to very low bit rates at high speech quality for the Internet and cell phones.

  3. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has long depended on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization with selection of a global, local, or mean threshold. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique is combined with feature extraction using the discrete sine transform (DST) for better classification results through multi-technique fusion. The novel methodology is compared to traditional feature extraction techniques for content-based image classification. Three benchmark datasets, namely the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation. The performance evaluation clearly reveals the superiority of the proposed fusion technique, with ordered mean values and the discrete sine transform, over the popular single-view feature extraction methodologies for classification.

  4. A feature extraction technique based on character geometry for character recognition

    CERN Document Server

    Gaurav, Dinesh Dileep

    2012-01-01

    This paper describes a geometry-based technique for feature extraction applicable to segmentation-based word recognition systems. The proposed system extracts the geometric features of the character contour. These features are based on the basic line types that form the character skeleton, and the system outputs a feature vector. The feature vectors generated from a training set were then used to train a pattern recognition engine based on neural networks so that the system could be benchmarked.

  5. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

    Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. Current research in biometric security uses feature optimization to improve face detection techniques. A face basically consists of three kinds of features, skin color, texture, and the shape and size of the face, of which skin color and texture are the most important. The proposed detection technique uses the texture feature of the face image. For texture extraction, a partial feature extraction function is used, which is a promising approach to shape feature analysis. For feature selection and optimization, a multi-objective TLBO is used; TLBO is a population-based search technique that defines two constraint functions for the selection and optimization process. The proposed face detection algorithm initially takes a face image database, passes the images through the partial feature extractor function, and obtains the texture features of the face images. For performance evaluation, the proposed algorithm was implemented in MATLAB 7.8.0 using face images provided by the Google face image database, with hit and miss ratios used for the numerical analysis of results. Our empirical evaluation shows better prediction results in comparison with the PIFR method of face detection.

  6. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring in the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically ex

  8. A new technique of ECG analysis and its application to evaluation of disorders during ventricular tachycardia

    Energy Technology Data Exchange (ETDEWEB)

    Moskalenko, A.V. [Institute of Theoretical and Experimental Biophysics RAS, Institutskaya Street, 3, Pushchino 142290 (Russian Federation)], E-mail: info@avmoskalenko.ru; Rusakov, A.V. [Institute of Theoretical and Experimental Biophysics RAS, Institutskaya Street, 3, Pushchino 142290 (Russian Federation); Elkin, Yu.E. [Institute of Mathematical Problems of Biology RAS, Institutskaya Street, 4, Pushchino 142290 (Russian Federation)

    2008-04-15

    We propose a new technique of ECG analysis to characterize the properties of polymorphic ventricular arrhythmias, potentially life-threatening disorders of cardiac activation. The technique is based on extracting two indices from the ECG fragment. The result is a new detailed quantitative description of polymorphic ECGs. Our observations suggest that the proposed ECG processing algorithm provides information that supplements the traditional visual ECG analysis. The estimates of ECG variation in this study reveal some unexpected details of ventricular activation dynamics, which are possibly useful for diagnosing cardiac rhythm disturbances.

  9. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. Using unlabeled samples, which are often available in practically unlimited quantity, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  10. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    Science.gov (United States)

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOB), with the maximum noise fraction (MNF) method adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs; the MNF then extracts the characteristic features of the SOBs, and the extracted features are combined into the feature vector for classification. Thus strong band correlations are avoided and spectral redundancy is reduced. The LS-SVM classifier, which replaces the inequality constraints in the SVM with equality constraints, is adopted, reducing the computational cost and improving learning performance. The proposed method optimizes spectral information through feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.

  11. Feature Extraction and Classification of Echo Signal of Ground Penetrating Radar

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hui-lin; TIAN Mao; CHEN Xiao-li

    2005-01-01

    An automatic feature extraction and classification algorithm for ground penetrating radar (GPR) echo signals is presented. The dyadic wavelet transform and the average energy of the wavelet coefficients are applied to decompose the echo signal and extract its features. The extracted feature vector is then fed to a feed-forward multi-layer perceptron classifier. Experimental results based on measured GPR echo signals obtained from the Mei-shan railway are presented.
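
    The feature stage described above reduces each echo trace to one average-energy value per wavelet scale. A minimal sketch with a toy trace follows (the MLP classifier is not shown, and the wavelet choice is an assumption).

```python
# Average energy of wavelet coefficients per scale as a GPR feature vector.
import numpy as np
import pywt

trace = np.random.randn(512)                        # toy GPR echo trace
coeffs = pywt.wavedec(trace, "db2", level=4)        # dyadic decomposition
feature = [float(np.mean(c ** 2)) for c in coeffs]  # average energy per scale
print("feature vector:", np.round(feature, 3))
```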

  12. Apriori and N-gram Based Chinese Text Feature Extraction Method

    Institute of Scientific and Technical Information of China (English)

    王晔; 黄上腾

    2004-01-01

    Feature extraction, which means extracting the representative words from a text, is an important issue in the text mining field. This paper presents a new Apriori and N-gram based Chinese text feature extraction method and analyzes its correctness and performance. Our method solves the problem that existing extraction methods cannot find frequent words of arbitrary length in Chinese texts. The experimental results show this method is feasible.

  13. Spectrum based feature extraction using spectrum intensity ratio for SSVEP detection.

    Science.gov (United States)

    Itai, Akitoshi; Funase, Arao

    2012-01-01

    In recent years, the Steady-State Visual Evoked Potential (SSVEP) has been used as a basis for Brain Computer Interfaces (BCI) [1]. Various feature extraction and classification techniques have been proposed to achieve SSVEP-based BCI. Feature extraction for SSVEP is carried out in the frequency domain, regardless of the limitation in the flickering frequency of the visual stimulus imposed by the hardware architecture. We introduce here feature extraction using a spectrum intensity ratio. Results show that the detection ratio reaches 84% when using the spectrum intensity ratio with unsupervised classification, and indicate that SSVEP detection is enhanced by the proposed feature extraction with the second harmonic.
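
    The paper's exact ratio definition is not given in the abstract; the sketch below takes one plausible form, power at the stimulus frequency plus its second harmonic over the power of the surrounding band, on a synthetic EEG trace.

```python
# Hypothetical spectrum intensity ratio for SSVEP detection.
import numpy as np

fs, f_stim = 256, 12.0                       # sampling rate, flicker frequency
t = np.arange(0, 4, 1 / fs)
eeg = 0.4 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f0, width=0.5):
    # Sum spectral power in a narrow band around frequency f0.
    return spec[(freqs > f0 - width) & (freqs < f0 + width)].sum()

target = band_power(f_stim) + band_power(2 * f_stim)  # fundamental + 2nd harmonic
background = spec[(freqs > 5) & (freqs < 30)].sum()   # broad surrounding band
print("spectrum intensity ratio:", target / background)
```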

  14. PyEEG: an open source Python module for EEG/MEG feature extraction.

    Science.gov (United States)

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  15. The ECG characteristic features of apical non-obstructive hypertrophic cardiomyopathy

    Institute of Scientific and Technical Information of China (English)

    李惠玲

    2016-01-01

    Objective: To investigate the characteristic electrocardiographic changes of non-obstructive apical hypertrophic cardiomyopathy and to improve its differential diagnosis. Methods: The ECG characteristics of 42 patients with confirmed non-obstructive apical hypertrophic cardiomyopathy admitted to our hospital (Shanxi Coal General Hospital) from January 2005 to December 2015 were summarized. Results: All 42 patients showed abnormal ECG changes. Precordial T-wave inversion exceeded 0.05 mV in all cases; RV5 was greater than 2.6 mV in 18 cases (42.9%); ST-segment depression of 0.05-0.4 mV was present in 20 cases (47.6%); no patient showed frontal-plane electrical axis abnormality or pathological precordial Q waves; 3 cases (7.1%) had atrial fibrillation. Conclusion: When a standard 12-lead ECG shows increased R-wave amplitude in chest leads V3-V5 with symmetrically inverted T waves, non-obstructive apical hypertrophic cardiomyopathy should be strongly considered; ECG abnormalities provide a characteristic basis for its diagnosis.

  16. Principal Component Analysis in ECG Signal Processing

    Directory of Open Access Journals (Sweden)

    Andreas Bollmann

    2007-01-01

    Full Text Available This paper reviews the current status of principal component analysis (PCA) in the area of ECG signal processing. The fundamentals of PCA are briefly described and the relationship between PCA and the Karhunen-Loève transform is explained. Aspects of PCA related to data with temporal and spatial correlations are considered, as is adaptive estimation of principal components. Several ECG applications are reviewed where PCA techniques have been successfully employed, including data compression, ST-T segment analysis for the detection of myocardial ischemia and abnormalities in ventricular repolarization, extraction of atrial fibrillatory waves for detailed characterization of atrial fibrillation, and analysis of body surface potential maps.

  17. Feature evaluation and extraction based on neural network in analog circuit fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yuan Haiying; Chen Guangju; Xie Yongle

    2007-01-01

    Choosing the right characteristic parameter is the key to fault diagnosis in analog circuits. Feature evaluation and extraction methods based on neural networks are presented. Evaluation of circuit feature parameters is realized through neural network training results; the network's superior nonlinear mapping capability is well suited to extracting fault features, which are subsequently normalized and compressed. Feature extraction based on neural networks effectively shifts the complex classification problem of fault pattern recognition in analog circuits into the feature processing stage, which improves diagnosis efficiency. A fault diagnosis example validates the method.

  18. A fingerprint feature extraction algorithm based on curvature of Bezier curve

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Fingerprint feature extraction is a key step in fingerprint identification. A novel feature extraction algorithm is proposed in this paper, which describes fingerprint features using the bending information of fingerprint ridges. Ridges in a specific region of the fingerprint image are traced first, and these ridges are then fitted with Bezier curves. Finally, the point of maximal curvature on each Bezier curve is defined as a feature point. Experimental results demonstrate that these feature points characterize the bending trend of fingerprint ridges effectively and are robust to noise; in addition, the extraction precision of this algorithm is better than that of conventional approaches.
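
    The defining computation, the point of maximal curvature on a fitted Bezier curve, can be sketched directly; the control points below are toy values standing in for a fitted ridge.

```python
# Locate the maximal-curvature parameter of a cubic Bezier curve.
import numpy as np

P = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], float)  # toy control points

def derivatives(t, P):
    # First and second derivatives of a cubic Bezier at parameter t.
    d1 = 3 * ((1 - t) ** 2 * (P[1] - P[0])
              + 2 * (1 - t) * t * (P[2] - P[1])
              + t ** 2 * (P[3] - P[2]))
    d2 = 6 * ((1 - t) * (P[2] - 2 * P[1] + P[0]) + t * (P[3] - 2 * P[2] + P[1]))
    return d1, d2

ts = np.linspace(0, 1, 201)
curvature = []
for t in ts:
    d1, d2 = derivatives(t, P)
    # Planar curvature: |x'y'' - y'x''| / |B'|^3
    curvature.append(abs(d1[0] * d2[1] - d1[1] * d2[0]) / np.linalg.norm(d1) ** 3)
print("max-curvature parameter:", ts[int(np.argmax(curvature))])
```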

  19. Feature Extraction of Chinese Materia Medica Fingerprint Based on Star Plot Representation of Multivariate Data

    Institute of Scientific and Technical Information of China (English)

    CUI Jian-xin; HONG Wen-xue; ZHOU Rong-juan; GAO Hai-bo

    2011-01-01

    Objective To study a novel feature extraction method for Chinese materia medica (CMM) fingerprints. Methods On the basis of radar (star plot) graphical representation theory for multivariate data, the radar map was used to display the non-graphical parameters of the CMM fingerprint; the map features were then extracted and feature fusion was proposed. Results Better performance was achieved when using this method on test data. Conclusion Feature extraction based on radar chart representation can mine valuable features that facilitate the identification of Chinese medicine.

  20. FPGA-core defibrillator using wavelet-fuzzy ECG arrhythmia classification.

    Science.gov (United States)

    Nambakhsh, Mohammad; Tavakoli, Vahid; Sahba, Nima

    2008-01-01

    An electrocardiogram (ECG) feature extraction and classification system has been developed and evaluated using Altera's Quartus II 7.1. QRS complexes were detected in the wavelet domain, and each complex was used to locate the peaks of the individual waves. A fuzzy classifier block then used these features to classify ECG beats; three types of arrhythmias and abnormalities were detected using this procedure. The completed algorithm was embedded into a Field Programmable Gate Array (FPGA). The prototype was tested with software-generated signals, with test scenarios covering several kinds of ECG signals from the MIT-BIH database. For the purpose of feeding signals into the FPGA, software was designed to read signal files and send them to the LPT port of a computer connected to the FPGA. The results show that the proposed prototype can perform real-time monitoring of the ECG signal for arrhythmia detection. We also implemented the algorithm on a sequential device, an AVR microcontroller with a 16 MHz clock, for the same purpose. The external clock of the FPGA is 50 MHz, and by utilizing the Phase Lock Loop (PLL) component inside the device it was possible to increase the internal clock up to 1.2 GHz. The final results compare the speed and resource usage of both devices, showing that, at the cost of more resource usage, the FPGA provides higher computation speed because it can compute most parts of the algorithm in parallel.

  1. [Analysis of pacemaker ECGs].

    Science.gov (United States)

    Israel, Carsten W; Ekosso-Ejangue, Lucy; Sheta, Mohamed-Karim

    2015-09-01

    The key to a successful analysis of a pacemaker electrocardiogram (ECG) is the application of the same systematic approach used for any other ECG: analysis of (1) basic rhythm and rate, (2) QRS axis, (3) PQ, QRS and QT intervals, (4) morphology of P waves, QRS complexes, ST segments and T(U) waves and (5) the presence of arrhythmias. If only the most obvious abnormality of a pacemaker ECG is considered, wrong conclusions can easily be drawn. If the systematic approach is skipped, it may be overlooked that, for example, atrial pacing is ineffective, the left ventricle is paced instead of the right ventricle, pacing competes with intrinsic conduction, or the atrioventricular (AV) conduction time is programmed too long. Apart from this analysis, a pacemaker ECG which is not clear should be checked for the presence of arrhythmias (e.g. atrial fibrillation, atrial flutter, junctional escape rhythm and endless loop tachycardia), pacemaker malfunction (e.g. atrial or ventricular undersensing or oversensing, atrial or ventricular loss of capture) and activity of specific pacing algorithms, such as automatic mode switching, rate adaptation, AV delay modifying algorithms, reaction to premature ventricular contractions (PVC), safety window pacing, hysteresis and noise mode. A systematic analysis of the pacemaker ECG almost always allows a probable diagnosis of arrhythmias and malfunctions to be made, which can be confirmed by pacemaker control and can often be corrected at the touch of the right button to the patient's benefit.

  2. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    Science.gov (United States)

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  3. STATISTICAL PROBABILITY BASED ALGORITHM FOR EXTRACTING FEATURE POINTS IN 2-DIMENSIONAL IMAGE

    Institute of Scientific and Technical Information of China (English)

    Guan Yepeng; Gu Weikang; Ye Xiuqing; Liu Jilin

    2004-01-01

    An algorithm for automatically extracting feature points is developed, in which the areas of feature points in a 2-dimensional (2D) image are first located using probability theory, correlation methods and an abnormality criterion. Feature points in a 2D image can then be extracted simply by calculating the standard deviation of grey values within sampled pixel areas. While extracting feature points, the need to determine the threshold by trial and error according to a priori information about the processed image is avoided. The proposed algorithm is shown to be valid and reliable by extracting feature points from actual natural images with both rich and weak textures, including multiple objects against complex backgrounds. It can meet the demands of automatically extracting 2D image feature points in machine vision systems.
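
    The core statistic, the standard deviation of grey values inside a sampling window, can be sketched as follows on a synthetic image; the window size is an assumption.

```python
# Standard deviation of grey values per sampling window; high-deviation
# windows are candidate feature-point areas.
import numpy as np

img = np.random.rand(128, 128) * 0.1       # low-texture toy image
img[60:68, 60:68] += 1.0                   # one high-contrast patch
w = 8                                      # assumed window size

std_map = np.array([[img[i:i + w, j:j + w].std()
                     for j in range(0, img.shape[1] - w + 1, w)]
                    for i in range(0, img.shape[0] - w + 1, w)])
i, j = np.unravel_index(std_map.argmax(), std_map.shape)
print("strongest feature window at pixel:", (i * w, j * w))  # near the patch
```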

  4. Feature extraction for target identification and image classification of OMIS hyperspectral image

    Institute of Scientific and Technical Information of China (English)

    DU Pei-jun; TAN Kun; SU Hong-jun

    2009-01-01

    In order to combine feature extraction operations with specific hyperspectral remote sensing information processing objectives, two aspects of feature extraction were explored. Based on clustering and decision tree algorithms, the spectral absorption index (SAI), continuum removal and derivative spectral analysis were employed to discover characteristic spectral features of different targets, and decision trees for identifying a specific class and discriminating between different classes were generated. By combining a support vector machine (SVM) classifier with different feature extraction strategies, including principal component analysis (PCA), minimum noise fraction (MNF), grouped PCA, and derivative spectral analysis, the performance of the feature extraction approaches in classification was evaluated. The results show that feature extraction by PCA and derivative spectral analysis is effective for OMIS (operational modular imaging spectrometer) image classification using SVM, and that SVM outperforms the traditional SAM and MLC classifiers on OMIS data.

  5. Multi-Scale Analysis Based Curve Feature Extraction in Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    YANG Hongjuan; ZHOU Yiqi; CHEN Chengjun; ZHAO Zhengxu

    2006-01-01

    A sectional curve feature extraction algorithm based on multi-scale analysis is proposed for reverse engineering. The algorithm consists of two parts: feature segmentation and feature classification. In the first part, curvature scale space is applied to multi-scale analysis and original feature detection; to obtain the primary and secondary curve primitives, feature fusion is realized by transmitting multi-scale feature detection information. In the second part, a projection height function based on the area of a quadrilateral is presented, which improves the criteria for sectional curve feature classification. Results on synthetic curves and practically scanned sectional curves are given to illustrate the efficiency of the proposed algorithm for feature extraction. The consistency between feature extraction based on multi-scale curvature analysis and curve primitives is verified.

  6. Robust human identification using ecg: eigenpulse revisited

    Science.gov (United States)

    Jang, Daniel; Wendelken, Suzanne; Irvine, John M.

    2010-04-01

    Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also provide a unique biometric signature. Several methods for processing ECG data have appeared in the literature and most approaches rest on an initial detection and segmentation of the heartbeats. Various sources of noise, such as sensor noise, poor sensor placement, or muscle movements, can degrade the ECG signal and introduce errors into the heartbeat segmentation. This paper presents a screening technique for assessing the quality of each segmented heartbeat. Using this technique, a higher quality signal can be extracted to support the identification task. We demonstrate the benefits of this quality screening using a principal component technique known as eigenpulse. The analysis demonstrated the improvement in performance attributable to the quality screening.

  7. Compressive sensing-based feature extraction for bearing fault diagnosis using a heuristic neural network

    Science.gov (United States)

    Yuan, Haiying; Wang, Xiuyu; Sun, Xun; Ju, Zijian

    2017-06-01

    Bearing fault diagnosis collects massive amounts of vibration data about a rotating machinery system, whose fault classification largely depends on feature extraction. Features reflecting bearing work states are directly extracted using time-frequency analysis of vibration signals, which leads to high dimensional feature data. To address the problem of feature dimension reduction, a compressive sensing-based feature extraction algorithm is developed to construct a concise fault feature set. Next, a heuristic PSO-BP neural network, whose learning process perfectly combines particle swarm optimization and the Levenberg-Marquardt algorithm, is constructed for fault classification. Numerical simulation experiments are conducted on four datasets sampled under different severity levels and load conditions, which verify that the proposed fault diagnosis method achieves efficient feature extraction and high classification accuracy.
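
    Only the compressive-sensing reduction step lends itself to a short sketch: a random Gaussian measurement matrix (a common choice, assumed here) projects a long feature vector onto a much shorter one while roughly preserving its geometry. The paper's PSO-BP classifier is not reproduced.

```python
# Random-projection compression of a high-dimensional fault-feature vector.
import numpy as np

rng = np.random.default_rng(4)
feature = rng.normal(size=2048)                        # high-dimensional features
m = 64                                                 # compressed dimension
Phi = rng.normal(size=(m, feature.size)) / np.sqrt(m)  # measurement matrix
compressed = Phi @ feature
print("compressed length:", compressed.size)
```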

  8. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study proves the usefulness of the resulting facial models for practical applications such as face recognition and facial animation.

  9. QRS DETECTION OF ECG - A STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    I.S. Siva Rao

    2015-03-01

    Full Text Available The electrocardiogram (ECG) is a graphical representation generated by the heart muscle and plays an important role in the diagnosis and monitoring of the heart's condition. A real-time analyzer based on filtering, beat recognition, clustering and classification of the signal, with a delay of at most a few seconds, can recognize life-threatening arrhythmias. ECG signal analysis examines anatomic and physiologic facets of the entire cardiac muscle. The initial task for proficient analysis is the removal of noise, attained by the use of wavelet transform analysis. Wavelets yield temporal and spectral information concurrently and offer flexibility through a choice of wavelet functions with different properties. This paper is concerned with the extraction of QRS complexes from ECG signals using Discrete Wavelet Transform based algorithms implemented in MATLAB. Denoising is done by removing inconsistent wavelet transform coefficients. The QRS complexes are then identified, and each peak can be utilized to find the peaks of separate waves such as P and T, and their derivatives. We put forward a new combinatory algorithm built on the Pan-Tompkins method and the multi-wavelet transform.
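
    The Pan-Tompkins stages named above (derivative, squaring, moving-window integration, thresholding) can be sketched compactly; the band-pass filtering stage and the paper's multi-wavelet step are omitted, and the signal is a synthetic spike train, not MIT-BIH data.

```python
# Compact Pan-Tompkins-style QRS detector (band-pass stage omitted).
import numpy as np

def detect_qrs(ecg, fs):
    diff = np.diff(ecg)                       # derivative emphasises QRS slopes
    squared = diff ** 2                       # squaring rectifies and sharpens peaks
    win = int(0.15 * fs)                      # ~150 ms integration window
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    above = mwi > 0.5 * mwi.max()             # crude fixed threshold
    return np.flatnonzero(above[1:] & ~above[:-1])  # rising edges ~ QRS onsets

fs = 360                                      # MIT-BIH sampling rate
t = np.arange(0, 5, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 64       # sharp positive spikes as toy beats
print("detected beat onsets (samples):", detect_qrs(ecg, fs))
```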

  10. A Scheme of sEMG Feature Extraction for Improving Myoelectric Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    Shuai Ding; Liang Wang

    2016-01-01

    This paper proposes a feature extraction scheme based on sparse representation, considering the non-stationary property of surface electromyography (sEMG). A Sparse Bayesian Learning (SBL) algorithm is introduced to extract features with optimal class separability, improving the recognition accuracy of multi-movement patterns. The SBL algorithm exploits the compressibility (or weak sparsity) of the sEMG signal in certain transformed domains. The feature extracted using the SBL algorithm, named SRC, represents the time-varying characteristics of the sEMG signal very effectively. We investigated the effect of the SRC feature by comparing it with fourteen other individual features and eighteen multi-feature sets in offline recognition. The results demonstrate that the SRC feature reveals important dynamic information in sEMG signals, and that multi-feature sets formed by the SRC feature and other single features yield superior recognition accuracy. The best average recognition accuracy of 91.67% was obtained using an SVM classifier with the multi-feature set combining the SRC feature and the waveform length (WL) feature. The proposed feature extraction scheme is promising for multi-movement recognition with high accuracy.

  11. A Mixed Approach Of Automated ECG Analysis

    Science.gov (United States)

    De, A. K.; Das, J.; Majumder, D. Dutta

    1982-11-01

    ECG is a non-invasive and risk-free technique for collecting data about the functional state of the heart. The data-processing techniques can be classified into two basically different approaches: first- and second-generation ECG computer programs. Not the opposition but the symbiosis of these two approaches will lead to systems with the highest accuracy. In our paper we describe a mixed approach which shows higher accuracy with a smaller amount of computational work. Key words: primary features, patients' parameter matrix, screening, logical comparison technique, multivariate statistical analysis, mixed approach.

  12. A New Method of Semantic Feature Extraction for Medical Images Data

    Institute of Scientific and Technical Information of China (English)

    XIE Conghua; SONG Yuqing; CHANG Jinyi

    2006-01-01

    In order to overcome the disadvantages of color-, shape- and texture-based feature definitions for medical images, this paper defines a new kind of semantic feature and its extraction algorithm. We first use a kernel density estimation statistical model to describe the complicated medical image data; second, we define certain typical representative pixels of the images as features; finally, we apply a hill-climbing strategy from artificial intelligence to extract those semantic features. Results from a content-based medical image retrieval system show that our semantic features have better distinguishing ability than color-, shape- and texture-based features and can noticeably improve the recall and precision ratios of the system.

  13. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences, as well as sequential groups of data points. A developable surface approximation procedure is employed to refine the incident approximation planes of the data points into developable strips. Experimental results are included to demonstrate the performance of the proposed method.

  14. Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction

    Science.gov (United States)

    2014-03-27

    Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction. Thesis by Matthew P. Crosser, Captain, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering, Graduate School (AFIT-ENG-14-M-21). Approved for public release; distribution unlimited.

  15. Correlation technique and least square support vector machine combine for frequency domain based ECG beat classification.

    Science.gov (United States)

    Dutta, Saibal; Chatterjee, Amitava; Munshi, Sugata

    2010-12-01

    The present work proposes the development of an automated medical diagnostic tool that can classify ECG beats. This is an important problem, as accurate and timely detection of cardiac arrhythmia can help provide proper medical attention to cure or reduce the ailment. The proposed scheme utilizes a cross-correlation based approach, where cross-spectral density information in the frequency domain is used to extract suitable features. A least squares support vector machine (LS-SVM) classifier is developed using these features so that ECG beats are classified into three categories: normal beats, PVC beats and other beats. This three-class classification scheme is developed using a small training dataset and tested with a very large testing dataset to show the generalization capability of the scheme. The scheme, when employed on 40 files in the MIT/BIH arrhythmia database, produced high classification accuracy in the range 95.51-96.12% and outperformed several competing algorithms.
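
    A hedged sketch of the frequency-domain feature idea: the cross-spectral density of an incoming beat against a reference template yields a short feature vector. The template, segment length and number of retained magnitudes are assumptions, and the LS-SVM stage is omitted.

```python
# Cross-spectral density magnitudes as beat features.
import numpy as np
from scipy.signal import csd

fs = 360
template = np.sin(np.linspace(0, np.pi, 180)) ** 3       # toy "normal" beat
beat = template + 0.05 * np.random.randn(template.size)  # incoming beat

f, Pxy = csd(beat, template, fs=fs, nperseg=64)
features = np.abs(Pxy)[:10]                  # first few CSD magnitudes as features
print("feature vector:", np.round(features, 4))
```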

  16. Wear Debris Identification Using Feature Extraction and Neural Network

    Institute of Scientific and Technical Information of China (English)

    王伟华; 马艳艳; 殷勇辉; 王成焘

    2004-01-01

    A method and results for the identification of wear debris using morphological features are presented. Color images of wear debris were used as initial data. Each particle was characterized by a set of numerical parameters combining its shape, color and surface texture features through a computer vision system. These features were used as the input vector of an artificial neural network for wear debris identification. A radial basis function (RBF) network based model suitable for wear debris recognition was established, and its algorithm is presented in detail. Compared with traditional recognition methods, the RBF network model converges faster and is more accurate.

  17. 2D-HIDDEN MARKOV MODEL FEATURE EXTRACTION STRATEGY OF ROTATING MACHINERY FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new feature extraction method based on a 2D hidden Markov model (HMM) is proposed, and time and frequency indices are introduced to represent the new features. The new feature extraction strategy is tested on experimental data collected from a Bently rotor experiment system. The results show that this methodology is very effective for extracting features of vibration signals during rotor speed-up and can be extended to other non-stationary signal analysis fields in the future.

  18. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    Science.gov (United States)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets, that is, on how patterns are distributed in the feature space. One reason, as we have pointed out, is that ICA features are obtained by increasing only their independence, even if class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. The results show that better recognition accuracy is obtained using our proposed SICA. Furthermore, pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.

  19. Feature Extraction and Spatial Interpolation for Improved Wireless Location Sensing

    Directory of Open Access Journals (Sweden)

    Chris Rizos

    2008-04-01

    Full Text Available This paper proposes a new methodology to improve location-sensing accuracy in wireless network environments by eliminating the effects of non-line-of-sight errors. After collecting bulk anonymous location measurements from a wireless network, the preparation stage of the proposed methodology begins. By investigating the collected location measurements in terms of signal features and geometric features, feature locations are identified. After the identification of feature locations, non-line-of-sight error correction maps are generated. During the real-time location sensing stage, each user can request localization with a set of location measurements, and the pre-computed correction maps are applied to the reported measurements. As a result, localization accuracy improves through the elimination of non-line-of-sight errors. A simulation, assuming a typical dense urban environment, demonstrates the benefits of the proposed location sensing methodology.

  20. Combination of heterogeneous EEG feature extraction methods and stacked sequential learning for sleep stage classification.

    Science.gov (United States)

    Herrera, L J; Fernandes, C M; Mora, A M; Migotina, D; Largo, R; Guillen, A; Rosa, A C

    2013-06-01

    This work proposes a methodology for sleep stage classification based on two main approaches: the combination of features extracted from electroencephalogram (EEG) signal by different extraction methods, and the use of stacked sequential learning to incorporate predicted information from nearby sleep stages in the final classifier. The feature extraction methods used in this work include three representative ways of extracting information from EEG signals: Hjorth features, wavelet transformation and symbolic representation. Feature selection was then used to evaluate the relevance of individual features from this set of methods. Stacked sequential learning uses a second-layer classifier to improve the classification by using previous and posterior first-layer predicted stages as additional features providing information to the model. Results show that both approaches enhance the sleep stage classification accuracy rate, thus leading to a closer approximation to the experts' opinion.
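
    Of the three feature families above, the Hjorth parameters are the easiest to show in code; the sketch below computes activity, mobility and complexity from a raw epoch (toy noise, not real EEG).

```python
# Hjorth activity, mobility and complexity of an EEG epoch.
import numpy as np

def hjorth(x):
    dx = np.diff(x)                                          # first derivative
    ddx = np.diff(dx)                                        # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

epoch = np.random.randn(3000)    # toy 30 s epoch at 100 Hz
print(hjorth(epoch))
```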

  1. Performance Comparison between Different Feature Extraction Techniques with SVM Using Gurumukhi Script

    Directory of Open Access Journals (Sweden)

    Sandeep Dangi

    2014-07-01

    Full Text Available This paper presents offline handwritten character recognition for the Gurumukhi script, a major script of India. Much work has been done in other languages such as English, Chinese, Devanagari and Tamil. Gurumukhi is the script of the Punjabi language, which is widely spoken across the globe. This paper focuses on better character recognition accuracy. The dataset includes 7000 samples collected in different writing styles, divided into a training set of 5600 samples and a test set of 1400 samples. The evaluated feature extraction techniques include distance profile, diagonal features and background direction distribution (BDD). These features were classified using an SVM classifier, and a performance comparison was made using one classifier with the different feature extraction techniques. The experiments show that the diagonal feature extraction method achieved the highest recognition accuracy, 95.39%.

  2. Comparison of half and full-leaf shape feature extraction for leaf classification

    Science.gov (United States)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out, and the results are compared with those obtained from full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their accuracy on this data. The classifiers were trained on 65 leaves to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing predictive accuracy.

  3. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    OpenAIRE

    Miroslav Benco; Robert Hudec; Patrik Kamencay; Martina Zachariasova; Slavomir Matuska

    2014-01-01

    This paper discusses research in the area of texture image classification. More specifically, the combination of texture and colour features is researched. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in experiments. For the texture classification, the support vector machine is...

  4. LEAST-SQUARES METHOD-BASED FEATURE FITTING AND EXTRACTION IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models. Before carrying out model reconstruction, it is important to extract geometric features, because the quality of modeling greatly depends on the representation of features. Some fitting techniques for natural quadric surfaces based on the least-squares method are described. These techniques can be directly used to extract quadric-surface features during the segmentation of point clouds.
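
    The abstract does not reproduce the fitting equations, but a least-squares fit of one natural quadric — a sphere — reduces to a linear system, as in this sketch (the point cloud is synthetic):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: ||p||^2 = 2 p.c + (r^2 - ||c||^2) is linear in (c, d)."""
    A = np.c_[2 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    return center, np.sqrt(d + center @ center)

# noisy points sampled from a sphere of radius 5 centred at (1, 2, 3)
rng = np.random.default_rng(0)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 5.0 * u + 0.01 * rng.normal(size=(500, 3))
print(fit_sphere(pts))
```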

  5. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    Science.gov (United States)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data processing. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on its assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
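
    A Hadoop cluster cannot be reproduced here, but the split/map/reduce structure the paper describes can be sketched locally, with a process pool standing in for the slave nodes; the histogram "feature" and the random images are toy stand-ins.

```python
from multiprocessing import Pool
import numpy as np

def map_extract(split):
    """Map step: per-image feature extraction on one split (toy 8-bin histogram)."""
    return [(img_id, np.histogram(img, bins=8, range=(0, 1))[0]) for img_id, img in split]

def reduce_collect(mapped):
    """Reduce step: aggregate per-image features (stand-in for writing to HDFS)."""
    return dict(kv for part in mapped for kv in part)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [(i, rng.random((64, 64))) for i in range(100)]
    splits = [images[i::4] for i in range(4)]        # 4 splits -> 4 "slave nodes"
    with Pool(4) as pool:
        features = reduce_collect(pool.map(map_extract, splits))
    print(len(features), features[0])
```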

  6. An Effective Fault Feature Extraction Method for Gas Turbine Generator System Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian-Hua Zhong

    2016-01-01

    Full Text Available Fault diagnosis is very important for maintaining the operation of a gas turbine generator system (GTGS) in power plants, where any abnormal situation will interrupt the electricity supply. Fault diagnosis of the GTGS faces the main challenge that the acquired data, vibration or sound signals, contain a great deal of redundant information, which extends the fault identification time and degrades the diagnostic accuracy. To improve diagnostic performance in the GTGS, an effective fault feature extraction framework is proposed to solve the problem of signal disorder and redundant information in the acquired signal. The proposed framework combines feature extraction with a general machine learning method, the support vector machine (SVM), to implement intelligent fault diagnosis. The feature extraction method adopts the wavelet packet transform and time-domain statistical features to extract fault features from the vibration signal. To further reduce redundant information in the extracted features, kernel principal component analysis is applied in this study. Experimental results indicate that the proposed feature extraction technique is an effective method to extract useful fault features, resulting in improved fault diagnosis performance for the GTGS.
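
    A rough sketch of the described chain — wavelet-packet subband energies plus time-domain statistics, then kernel PCA to strip redundancy — is given below on synthetic vibration frames; the wavelet, decomposition level and statistic set are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.decomposition import KernelPCA

def fault_features(sig, wavelet="db4", level=3):
    """Wavelet-packet subband energies plus time-domain statistics of one frame."""
    wp = pywt.WaveletPacket(sig, wavelet, maxlevel=level)
    energies = [np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")]
    stats = [sig.std(), np.sqrt(np.mean(sig ** 2)), kurtosis(sig), skew(sig), np.ptp(sig)]
    return np.array(energies + stats)

frames = np.random.default_rng(0).standard_normal((60, 2048))    # toy vibration frames
X = np.array([fault_features(f) for f in frames])
X_red = KernelPCA(n_components=5, kernel="rbf").fit_transform(X)  # redundancy reduction
print(X.shape, "->", X_red.shape)
```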

  7. Rule set transferability for object-based feature extraction

    NARCIS (Netherlands)

    Anders, N.S.; Seijmonsbergen, Arie C.; Bouten, Willem

    2015-01-01

    Cirques are complex landforms resulting from glacial erosion and can be used to estimate Equilibrium Line Altitudes and infer climate history. Automated extraction of cirques may help research on glacial geomorphology and climate change. Our objective was to test the transferability of an object-

  9. Advances in Modern Capacitive ECG Systems for Continuous Cardiovascular Monitoring

    Directory of Open Access Journals (Sweden)

    A. Schommartz

    2011-01-01

    Full Text Available The technique of capacitive electrocardiography (cECG) is very promising and can be applied in a flexible manner. Already integrated into several everyday objects, single-lead cECG systems have shown that easy-to-use electrocardiogram measurements are possible without elaborate patient preparation. Multi-channel cECG systems enable the extraction of ECG signals even in the presence of coupled interference, thanks to the additional redundant information. This paper therefore presents challenges for electronic hardware design, building on developments in recent years that have moved from the single-lead cECG system to multi-channel systems in order to provide robust measurements, e.g. even while driving an automobile.

  10. Robust Speech Recognition Method Based on Discriminative Environment Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    HAN Jiqing; GAO Wen

    2001-01-01

    It is an effective approach to learn the influence of environmental parameters, such as additive noise and channel distortions, from training data for robust speech recognition. Most of the previous methods are based on the maximum likelihood estimation criterion; however, these methods do not lead to a minimum error rate result. In this paper, a novel discriminative learning method for environmental parameters, based on the Minimum Classification Error (MCE) criterion, is proposed. In the method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. Consequently, the clean speech features are estimated from the noisy speech features with the estimated environmental parameters, and these estimates are then used in the back-end HMM classifier. Experiments show that a best error rate reduction of 32.1% is obtained, tested on a task of 18 isolated confusable Korean words, relative to a conventional HMM system.

  11. Feature extraction for the analysis of colon status from the endoscopic images

    Directory of Open Access Journals (Sweden)

    Krishnan Shankar M

    2003-04-01

    Full Text Available Abstract Background Extracting features from colonoscopic images is essential for obtaining features that characterize the properties of the colon. The features are employed in the computer-assisted diagnosis of colonoscopic images to assist the physician in assessing the colon status. Methods Endoscopic images contain rich texture and color information. Novel schemes are developed to extract new texture features from the texture spectra in the chromatic and achromatic domains, and color features for a selected region of interest from each color component histogram of the colonoscopic images. These features are reduced in size using Principal Component Analysis (PCA) and are evaluated using a Backpropagation Neural Network (BPNN). Results Features extracted from endoscopic images were tested to classify the colon status as either normal or abnormal. The classification results obtained show the features' capability for classifying the colon's status. The average classification accuracy using a hybrid of the texture and color features with PCA (τ = 1%) is 97.72%, higher than the average classification accuracy using only texture (96.96%, τ = 1%) or color (90.52%, τ = 1%) features. Conclusion In conclusion, novel methods for extracting new texture- and color-based features from colonoscopic images to classify the colon status have been proposed, together with a new approach using PCA in conjunction with BPNN for evaluating the features. The preliminary test results support the feasibility of the proposed method.

  12. A Survey of Feature Extraction and Classification Techniques in OCR Systems

    Directory of Open Access Journals (Sweden)

    Rohit Verma

    2012-11-01

    Full Text Available This paper describes a set of feature extraction and classification techniques, which play a very important role in the recognition of characters. Feature extraction provides methods with the help of which characters can be identified uniquely and with a high degree of accuracy; it helps to find the shape contained in a pattern. Although a number of techniques are available for feature extraction and classification, the choice of technique decides the degree of recognition accuracy. A lot of research has been done in this field, and new techniques for extraction and classification have been developed. The objective of this paper is to review these techniques so that this set of techniques can be appreciated.

  13. Interpretation of Normal and Pathological ECG Beats using Multiresolution Wavelet Analysis

    Directory of Open Access Journals (Sweden)

    Shubhada S. Ardhapurkar

    2012-12-01

    Full Text Available The discrete wavelet transform is well suited to analysing the temporal and spectral properties of non-stationary signals such as the ECG. In this paper, we have developed and evaluated a robust algorithm using multiresolution analysis based on the discrete wavelet transform (DWT) for twelve-lead electrocardiogram (ECG) temporal feature extraction. In the first step, the ECG was denoised considerably by employing kernel density estimation on the subband coefficients; QRS complexes were then detected. Further, by selecting appropriate coefficients and applying a wave segmentation strategy, P- and T-wave peaks were detected. Finally, the P- and T-wave onsets and ends were determined. The novelty of this approach lies in the detection of different morphologies in the ECG wave with few decision rules. We evaluated the algorithm on normal and abnormal beats from various manually annotated PhysioBank databases with different sampling frequencies. The QRS detector obtained a sensitivity of 99.5% and a positive predictivity of 98.9% over the first lead of the MIT-BIH Arrhythmia Database.
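
    The paper's decision rules are not reproduced here, but the generic first steps — DWT decomposition, retention of the mid scales where QRS energy concentrates, and peak picking — can be sketched on a synthetic signal; the wavelet, kept scales and thresholds are assumptions rather than the authors' settings.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def qrs_candidates(ecg, fs=360, wavelet="db4", level=4):
    """Zero the approximation and finest details, reconstruct, then peak-pick."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    kept = [np.zeros_like(coeffs[0])] + [c if i in (1, 2) else np.zeros_like(c)
                                         for i, c in enumerate(coeffs[1:], start=1)]
    recon = pywt.waverec(kept, wavelet)[: len(ecg)]
    peaks, _ = find_peaks(np.abs(recon), height=2.5 * np.abs(recon).std(),
                          distance=int(0.25 * fs))   # ~250 ms refractory period
    return peaks

fs = 360
impulses = np.zeros(10 * fs)
impulses[::fs] = 1.0                                  # one synthetic "beat" per second
ecg = (np.convolve(impulses, np.hanning(20), mode="same")
       + 0.05 * np.random.default_rng(0).standard_normal(len(impulses)))
print(qrs_candidates(ecg, fs))
```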

  14. Texture Feature Extraction Method Combining Nonsubsampled Contour Transformation with Gray Level Co-occurrence Matrix

    Directory of Open Access Journals (Sweden)

    Xiaolan He

    2013-12-01

    Full Text Available The gray-level co-occurrence matrix (GLCM) is an important method for extracting image texture features from synthetic aperture radar (SAR) images. However, GLCM can only extract textures at a single scale and in a single direction. A texture feature extraction method combining the nonsubsampled contourlet transform (NSCT) and GLCM is therefore proposed, so as to extract texture features at multiple scales and in multiple directions. We first decompose the SAR image at multiple scales and in multiple directions with the NSCT, then extract co-occurrence statistics with the GLCM from the resulting sub-band images, and conduct a correlation analysis of the extracted statistics to remove redundant feature quantities; these are combined with gray-level features to constitute a multi-feature vector. Finally, making full use of the advantages of the support vector machine with small sample databases and its generalization ability, the multi-feature vector space is partitioned by SVM to achieve SAR image segmentation. The experimental results show that segmentation accuracy can be improved and good edge retention obtained by using the GLCM texture extraction method based on the NSCT domain and multi-feature fusion for SAR image segmentation.
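
    No NSCT implementation ships with the common Python libraries, so the sketch below substitutes a single-level 2-D DWT for the multi-scale, multi-direction decomposition and then computes multi-direction GLCM statistics on each sub-band; the substitution and every parameter choice are assumptions.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def subband_glcm_features(img):
    """GLCM contrast/correlation, averaged over 4 directions, per wavelet sub-band."""
    _, (lh, hl, hh) = pywt.dwt2(img, "db2")
    feats = []
    for band in (lh, hl, hh):
        q = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-12))
        glcm = graycomatrix(q, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        feats += [graycoprops(glcm, "contrast").mean(),
                  graycoprops(glcm, "correlation").mean()]
    return np.array(feats)

img = np.random.default_rng(0).random((128, 128))     # random stand-in for a SAR image
print(subband_glcm_features(img))
```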

  15. Bispectrum-based feature extraction technique for devising a practical brain-computer interface

    Science.gov (United States)

    Shahid, Shahjahan; Prasad, Girijesh

    2011-04-01

    The extraction of distinctly separable features from the electroencephalogram (EEG) is one of the main challenges in designing a brain-computer interface (BCI). Existing feature extraction techniques for a BCI are mostly developed based on traditional signal processing techniques, assuming that the signal is Gaussian and has linear characteristics. But motor imagery (MI)-related EEG signals are highly non-Gaussian, non-stationary and have nonlinear dynamic characteristics. This paper proposes an advanced, robust but simple feature extraction technique for an MI-related BCI. The technique uses one of the higher order statistics methods, the bispectrum, and extracts the features of nonlinear interactions over several frequency components in MI-related EEG signals. Along with a linear discriminant analysis classifier, the proposed technique has been used to design an MI-based BCI. Three performance measures, classification accuracy, mutual information and Cohen's kappa, have been evaluated and compared with a BCI using a contemporary power spectral density-based feature extraction technique. It is observed that the proposed technique extracts nearly recording-session-independent distinct features, resulting in significantly higher and more consistent MI task detection accuracy and Cohen's kappa. It is therefore concluded that bispectrum-based feature extraction is a promising technique for detecting different brain states.
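
    For reference, a direct FFT-based bispectrum estimator of the kind such techniques build on can be written compactly; the segment length, window and phase-coupled test signal are illustrative choices.

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Segment-averaged direct estimate B(f1,f2) = E[X(f1) X(f2) conj(X(f1+f2))]."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft) * np.hanning(nfft)
    X = np.fft.fft(segs, axis=1)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for f1 in range(nfft // 2):
        for f2 in range(nfft // 2):
            B[f1, f2] = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
    return np.abs(B)

# three zero-phase sinusoids with f3 = f1 + f2 are quadratically phase coupled
n = np.arange(16384)
x = (np.cos(2 * np.pi * 16 / 128 * n) + np.cos(2 * np.pi * 24 / 128 * n)
     + np.cos(2 * np.pi * 40 / 128 * n))
B = bispectrum(x)
print(np.unravel_index(B.argmax(), B.shape))          # peak near bins (16, 24) / (24, 16)
```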

  16. Leveraging Large Data with Weak Supervision for Joint Feature and Opinion Word Extraction

    Institute of Scientific and Technical Information of China (English)

    房磊; 刘彪; 黄民烈

    2015-01-01

    Product feature and opinion word extraction is very important for fine granular sentiment analysis. In this paper, we leverage large-scale unlabeled data for joint extraction of feature and opinion words under a knowledge-poor setting, in which only a few feature-opinion pairs are utilized as weak supervision. Our major contributions are two-fold: first, we propose a data-driven approach to represent product features and opinion words as a list of corpus-level syntactic relations, which captures rich language structures; second, we build a simple yet robust unsupervised model with prior knowledge incorporated to extract new feature and opinion words, which obtains high performance robustly. The extraction process is based upon a bootstrapping framework which, to some extent, reduces error propagation under large data. Experimental results under various settings compared with state-of-the-art baselines demonstrate that our method is effective and promising.

  17. Feature Extraction from 3D Point Cloud Data Based on Discrete Curves

    Directory of Open Access Journals (Sweden)

    Yi An

    2013-01-01

    Full Text Available Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting geometric features from 3D point cloud data based on discrete curves. We extract discrete curves from the 3D point cloud data and study the behaviors of chord lengths, angle variations, and principal curvatures at the geometric features in the discrete curves. The corresponding similarity indicators are then defined. Based on these similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of the 3D point cloud data. The threshold values of the similarity indicators are taken from [0,1], which characterizes the relative relationship and makes threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.

  18. Shift- and deformation-robust optical character recognition based on parallel extraction of simple features

    Science.gov (United States)

    Jang, Ju-Seog; Shin, Dong-Hak

    1997-03-01

    For a flexible pattern recognition system that is robust to the input variations, a feature extraction approach is investigated. Two types of features are extracted: one is line orientations, and the other is the eigenvectors of the covariance matrix of the patterns that cannot be distinguished with the line orientation features alone. For the feature extraction, the Vander Lugt-type filters are used, which are recorded in a small spot of holographic recording medium by use of multiplexing techniques. A multilayer perceptron implemented in a computer is trained with a set of optically extracted features, so that it can recognize the input patterns that are not used in the training. Through preliminary experiments, where English character patterns composed of only straight line segments were tested, the feasibility of our approach is demonstrated.

  19. The extraction of wind turbine rolling bearing fault features based on VMD and bispectrum

    Science.gov (United States)

    Yuan, Jingyi; Song, Peng; Wang, Yongjie

    2017-08-01

    Aiming at extracting wind turbine rolling bearing fault features against background noise, a method based on variational mode decomposition (VMD) and the bispectrum is proposed. First, the rolling bearing fault signal is decomposed using VMD, and the two components with obvious impact features are extracted and reconstructed using a kurtosis-correlation coefficient criterion. Second, the reconstructed signal is analyzed using the bispectrum, which has good noise suppression capability. Finally, according to the bispectrum analysis, the fault features of the rolling bearing can be extracted. Analysis of a simulated rolling bearing fault signal verifies the effectiveness of the proposed method, which was then applied to extract fault features from bearing fault test signals. Different rolling bearing fault features could be identified effectively, so accurate fault diagnosis can be achieved.

  20. AUTO-EXTRACTING TECHNIQUE OF DYNAMIC CHAOS FEATURES FOR NONLINEAR TIME SERIES

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo

    2006-01-01

    The main purpose of nonlinear time series analysis, which is based on phase-space reconstruction theory, is to study how to transform a response signal into a reconstructed phase space in order to extract dynamic feature information, providing an effective approach for nonlinear signal analysis and fault diagnosis of nonlinear dynamic systems; it has already become an important branch of nonlinear science. However, traditional methods cannot extract chaos features automatically and require human participation throughout the process. A new method is put forward that implements automatic extraction of chaos features for nonlinear time series: first, the time delay τ is confirmed by the autocorrelation method; second, the embedding dimension m and the correlation dimension D are computed; third, the maximum Lyapunov exponent λmax is computed; finally, the chaos degree Dch is calculated. Automatic extraction of chaos features is important for fault diagnosis of nonlinear systems based on nonlinear chaos features, and examples show the validity of the proposed method.
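
    The first two steps — choosing the delay τ from the autocorrelation and reconstructing the phase space — can be sketched as follows; the zero-crossing rule for τ is one common variant of the autocorrelation method, and the signal is synthetic.

```python
import numpy as np

def first_acf_zero(x):
    """Delay tau chosen as the first zero crossing of the autocorrelation."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return int(np.argmax(acf / acf[0] <= 0))

def delay_embed(x, dim, tau):
    """Phase-space reconstruction: rows [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

x = np.sin(0.05 * np.arange(5000)) + 0.1 * np.random.default_rng(0).standard_normal(5000)
tau = first_acf_zero(x)
print(tau, delay_embed(x, dim=3, tau=tau).shape)
```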

  1. A novel similarity comparison approach for dynamic ECG series.

    Science.gov (United States)

    Yin, Hong; Zhu, Xiaoqian; Ma, Shaodong; Yang, Shuqiang; Chen, Liqian

    2015-01-01

    The heart sound signal is a reflection of heart and vascular system motion. Long-term continuous electrocardiogram (ECG) monitoring contains important information that can help prevent heart failure. A single piece of a long-term ECG recording usually consists of more than one hundred thousand data points, making it difficult to derive hidden features that may be reflected through dynamic ECG monitoring, and very time-consuming to analyze. In this paper, Dynamic Time Warping based on MapReduce (MRDTW) is proposed to make prognoses of possible lesions in patients. Through comparison of a patient's real-time ECG with reference sets of normal and problematic cardiac waveforms, the experimental results reveal that our approach not only retains high accuracy but also greatly improves the efficiency of the similarity measure for dynamic ECG series.
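
    MapReduce distribution aside, the core similarity measure is plain dynamic time warping; a textbook O(nm) implementation — not the paper's parallelized version — is sketched below on synthetic beats.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 200)
reference = np.sin(t)
warped = np.sin(t - 0.3)                 # same waveform, slightly time-shifted
different = np.cos(3 * t)
print(dtw_distance(reference, warped), dtw_distance(reference, different))
```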

  2. Pattern representation in feature extraction and classifier design: matrix versus vector.

    Science.gov (United States)

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extended pattern representation to the vector, has proven to be effective in feature extraction. However, the subsequent classifier following the matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where it has been demonstrated that the effectiveness in classification just attributes to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., the matrix-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (MatFE + MatCD). To more comprehensively validate MatFE + MatCD, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector respectively, i.e., MatFE + MatCD, MatFE + VecCD, just the matrix-pattern-oriented classifier design (MatCD), the vector-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (VecFE + MatCD), the vector-pattern-oriented feature extraction followed by the vector-pattern-oriented classifier design (VecFE + VecCD) and just the vector-pattern-oriented classifier design (VecCD). The experiments on the combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) has an effective and efficient performance on those patterns with the prior structural knowledge such as images; and 2) the matrix gives us an alternative feasible pattern representation in feature extraction and classifier designs, and meanwhile provides a necessary validation for "ugly duckling" and "no free lunch" theorems.

  3. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias (Knoxville, TN); Rodriguez, Jr., Miguel; Qi, Hairong (Knoxville, TN); Wang, Xiaoling (San Jose, CA)

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  4. Edge-Based Feature Extraction Method and Its Application to Image Retrieval

    Directory of Open Access Journals (Sweden)

    G. Ohashi

    2003-10-01

    Full Text Available We propose a novel feature extraction method for content-based image retrieval using graphical rough sketches. The proposed method extracts features based on the shape and texture of objects. This edge-based feature extraction method functions by representing the relative positional relationship between edge pixels, and has the advantage of being shift-, scale-, and rotation-invariant. In order to verify its effectiveness, we applied the proposed method to 1,650 images obtained from the Hamamatsu-city Museum of Musical Instruments and 5,500 images obtained from the Corel Photo Gallery. The results verified that the proposed method is an effective tool for achieving accurate retrieval.

  5. Diagonal Based Feature Extraction for Handwritten Alphabets Recognition System using Neural Network

    CERN Document Server

    Pradeep, J; Himavathi, S; 10.5121/ijcsit.2011.3103

    2011-01-01

    An off-line handwritten alphabetical character recognition system using a multilayer feed-forward neural network is described in the paper. A new method, called diagonal-based feature extraction, is introduced for extracting the features of handwritten alphabets. Fifty data sets, each containing 26 alphabets written by various people, are used for training the neural network, and 570 different handwritten alphabetical characters are used for testing. The proposed recognition system performs quite well, yielding higher levels of recognition accuracy compared to systems employing the conventional horizontal and vertical methods of feature extraction. This system will be suitable for converting handwritten documents into structural text form and for recognizing handwritten names.
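
    A minimal sketch of zone-wise diagonal feature extraction in this general spirit follows; the image size, zone size and averaging rule are assumptions rather than the paper's exact configuration.

```python
import numpy as np

def diagonal_features(img, zone=10):
    """Average each zone's 2*zone-1 diagonals, then average those per zone."""
    h, w = img.shape
    feats = []
    for r in range(0, h, zone):
        for c in range(0, w, zone):
            z = img[r:r + zone, c:c + zone]
            diags = [z.diagonal(k).mean() for k in range(-zone + 1, zone)]
            feats.append(np.mean(diags))
    return np.array(feats)

# toy 60x90 binarized character image -> 6*9 = 54 zone features
img = (np.random.default_rng(0).random((60, 90)) > 0.7).astype(float)
print(diagonal_features(img).shape)
```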

  6. Wavelet Energy Feature Extraction and Matching for Palmprint Recognition

    Institute of Scientific and Technical Information of China (English)

    Xiang-Qian Wu; Kuan-Quan Wang; David Zhang

    2005-01-01

    According to the fact that the basic features of a palmprint, including principal lines, wrinkles and ridges, have different resolutions, in this paper we analyze palmprints using a multi-resolution method and define a novel palmprint feature, called the wavelet energy feature (WEF), based on the wavelet transform. The WEF can reflect the wavelet energy distribution of the principal lines, wrinkles and ridges in different directions at different resolutions (scales), and thus can efficiently characterize palmprints. This paper also analyses the discriminability of each WEF level and, according to these discriminabilities, chooses a suitable weight for each level to compute a weighted city block distance for recognition. The experimental results show that the order of the discriminabilities of the WEF levels, from strong to weak, is the 4th, 3rd, 5th, 2nd and 1st level. They also show that the WEF is robust to some extent against rotation and translation of the images. Accuracies of 99.24% and 99.45% have been obtained in palmprint verification and palmprint identification, respectively. These results demonstrate the power of the proposed approach.
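
    A sketch of a wavelet-energy-style feature with weighted city-block matching is given below; the wavelet, number of levels and uniform weights are placeholders, whereas the paper tunes per-level weights from the measured discriminabilities.

```python
import numpy as np
import pywt

def wavelet_energy_feature(img, wavelet="db2", levels=5):
    """Energy of the horizontal/vertical/diagonal detail bands at each level."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    return np.array([np.sum(band ** 2) for detail in coeffs[1:] for band in detail])

def weighted_city_block(f1, f2, weights):
    """Weighted L1 distance used for matching."""
    return np.sum(weights * np.abs(f1 - f2))

rng = np.random.default_rng(0)
palm_a, palm_b = rng.random((128, 128)), rng.random((128, 128))   # toy "palmprints"
fa, fb = wavelet_energy_feature(palm_a), wavelet_energy_feature(palm_b)
print(weighted_city_block(fa, fb, np.ones_like(fa)))  # uniform weights as placeholder
```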

  7. The fuzzy Hough Transform-feature extraction in medical images

    Energy Technology Data Exchange (ETDEWEB)

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B. (Univ. of Iowa, Iowa City, IA (United States)); McPherson, D.D.; Gotteiner, N.L. (Northwestern Univ., Chicago, IL (United States). Dept. of Internal Medicine)

    1994-06-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location, and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.

  8. Hand veins feature extraction using DT-CNNS

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2007-05-01

    As the identification process is based on the unique patterns of the users, biometrics technologies are expected to provide highly secure authentication systems. The existing systems using fingerprints or retina patterns are, however, very vulnerable. One's fingerprints are accessible as soon as the person touches a surface, while a high resolution camera easily captures the retina pattern. Thus, both patterns can easily be "stolen" and forged. Besides, technical considerations decrease the usability of these methods: due to the direct contact with the finger, the sensor gets dirty, which decreases the authentication success ratio, while aligning the eye with a camera to capture the retina pattern gives an uncomfortable feeling. On the other hand, vein patterns of either the palm of the hand or a single finger offer stable, unique and repeatable biometric features. A fingerprint-based identification system using Cellular Neural Networks has already been proposed by Gao. His system covers all stages of a typical fingerprint verification procedure, from Image Preprocessing to Feature Matching. This paper performs a critical review of the individual algorithmic steps. Notably, the operation of False Feature Elimination is applied only once instead of 3 times, and the number of iterations is limited to 1 for all used templates. Hence, the computational need of the feedback contribution is removed, and the computational effort is drastically reduced without a notable change in quality. This allows a full integration of the detection mechanism. The system is prototyped on a Xilinx Virtex II Pro P30 FPGA.

  9. Automatic extraction of disease-specific features from Doppler images

    Science.gov (United States)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows strong agreement, as well as identification of new cases missed by the echocardiographers.

  10. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    Directory of Open Access Journals (Sweden)

    Qingqing Lu

    2014-01-01

    Full Text Available Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, the interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) is applied to the A-scan data and approximation coefficients are extracted. Then, the fractional Fourier transform (FRFT) is used to transform the approximation coefficients into the fractional domain, from which features are extracted. The features are supplied to support vector machine (SVM) classifiers to automatically identify the material of underground objects. Experimental results show that the proposed feature-based SVM system performs well in classification accuracy compared with statistical and frequency-domain feature-based SVM systems in noisy environments, and that the classification accuracy of the proposed features has little dependence on the SVM model.

  11. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    Science.gov (United States)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increases the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were executed on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance and neurophysiological signals, especially EEG signals. With consideration of individual differences, the common features across multiple estimators testify to the effectiveness of relaxation in sustained mental work. Relaxation techniques can be practically applied to prevent the accumulation of mental fatigue and maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  12. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hypersaline lake (before September 2010) in the world. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using multi-temporal Landsat 5-TM, 7-ETM+ and 8-OLI images. In doing so, the applicability of different satellite-derived indexes, including the Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI), was investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to the other indexes and hence was used to model the spatiotemporal changes of the lake. In addition, a new approach based on principal components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia's surface area in the period 2000–2013, especially between 2010 and 2013, when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting changes between two and three different times simultaneously.
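
    Of the compared indexes, the winning NDWI is straightforward to compute from the green and near-infrared bands; the random arrays below stand in for Landsat bands, and the zero threshold is the usual default rather than a calibrated value.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water tends toward NDWI > 0."""
    return (green - nir) / (green + nir + 1e-12)

rng = np.random.default_rng(0)
green, nir = rng.random((100, 100)), rng.random((100, 100))   # stand-in Landsat bands
water_mask = ndwi(green, nir) > 0.0
print(water_mask.mean())            # fraction of pixels flagged as water
```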

  13. The programmable ECG simulator.

    Science.gov (United States)

    Caner, Candan; Engin, Mehmet; Engin, Erkan Zeki

    2008-08-01

    This paper reports the design and development of a Digital Signal Controller (DSPIC)-based ECG simulator intended for use in the testing, calibration and maintenance of electrocardiographic equipment, and to support biomedical engineering students' education. It generates all 12 ECG lead signals of a healthy subject, with a profile that varies with heart rate, amplitude, and different noise contamination in a manner that reflects true in vivo conditions. The heart rate can be set in the range of 30 to 120 beats/minute in four steps. The noise and power-line interference effects can be set in the range of 0 to 20 dB in three steps. Since standard commercially available electronic components were used to construct the prototype simulator, the proposed design is also relatively inexpensive to produce.

  14. The fuzzy Hough transform-feature extraction in medical images.

    Science.gov (United States)

    Philip, K P; Dove, E L; McPherson, D D; Gotteiner, N L; Stanford, W; Chandran, K B

    1994-01-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location, and the derived region of interest, the authors find the final (improved) estimate of the true borders with other (subsequently used) image processing techniques. They present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.

  15. Novel Method for Color Textures Features Extraction Based on GLCM

    Directory of Open Access Journals (Sweden)

    R. Hudec

    2007-12-01

    Full Text Available Texture is one of the most popular features for image classification and retrieval. Because grayscale textures provide enough information to solve many tasks, color information was often not utilized. In recent years, however, many researchers have begun to take color information into consideration. In the texture analysis field, many algorithms have been enhanced to process color textures and new ones have been developed. In this paper, a new method for color GLCM textures is presented and compared with other well-known methods.

  16. Iris image enhancement for feature recognition and extraction

    CSIR Research Space (South Africa)

    Mabuza, GP

    2012-10-01

    Full Text Available [Only fragments of this record's abstract survive in the source: citations to Gonzalez, R.C. and Woods, R.E. (2002), Digital Image Processing, 2nd Edition, Englewood Cliffs: Prentice Hall, pp. 17-36, and Proença, H. and Alexandre, L.A. (2007), Toward Noncooperative Iris Recognition: A classification approach, together with a methodology note stating that a flow chart (Figure 2) demonstrates the processes followed for iris image enhancement for feature recognition and extraction.]

  17. Geometric feature extraction by a multimarked point process.

    Science.gov (United States)

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  18. Medical Image Fusion Based on Feature Extraction and Sparse Representation.

    Science.gov (United States)

    Fei, Yin; Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed: a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and the EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.

  19. FEATURE EXTRACTION OF RETINAL IMAGE FOR DIAGNOSIS OF ABNORMAL EYES

    Directory of Open Access Journals (Sweden)

    S. Praveenkumar

    2011-05-01

    Full Text Available Currently, medical image processing draws the intense interest of scientists and physicians for aiding clinical diagnosis. The retinal fundus image is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. If these diseases are detected and treated early, many visual losses can be prevented. This paper presents methods to detect the main features of fundus images, such as the optic disk, fovea, exudates and blood vessels. To determine the optic disk and its centre, we find the brightest part of the fundus. The candidate region of the fovea is defined as a circular area, and the fovea is detected using its spatial relationship with the optic disk. Exudates are found using their high grey-level variation, and their contours are determined by means of morphological reconstruction techniques. The blood vessels are highlighted using a bottom-hat transform and morphological dilation after edge detection. All the enhanced features are then combined in the fundus image for the detection of abnormalities in the eye.
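
    The vessel-highlighting step (bottom-hat transform, then edge detection and dilation) can be sketched with OpenCV; the structuring-element size, Canny thresholds and the random stand-in for a fundus green channel are all assumptions.

```python
import cv2
import numpy as np

# random stand-in for the green channel of a fundus image
fundus_green = (255 * np.random.default_rng(0).random((256, 256))).astype(np.uint8)

# bottom-hat (black-hat) transform highlights thin dark structures such as vessels
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
vessels = cv2.morphologyEx(fundus_green, cv2.MORPH_BLACKHAT, kernel)

# edge detection followed by morphological dilation, as described in the abstract
edges = cv2.Canny(vessels, 30, 90)
highlighted = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
print(highlighted.shape, int(highlighted.max()))
```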

  20. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.

    Science.gov (United States)

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-02-06

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions.
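
    A compact sketch of the SMOTE → RF-RFE → random-forest chain using imbalanced-learn and scikit-learn follows; the random feature matrix stands in for the CSP and g-gap descriptors, and oversampling before cross-validation is a simplification that leaks information in a real evaluation.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 40))        # stand-in for CSP + g-gap features
y = np.r_[np.ones(120), np.zeros(30)]     # imbalanced cis/trans labels

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)        # balance classes
rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=10).fit(X_bal, y_bal)  # RF-RFE selection
X_opt = selector.transform(X_bal)
print(cross_val_score(rf, X_opt, y_bal, cv=5).mean())
```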

  1. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    Directory of Open Access Journals (Sweden)

    Runtao Yang

    2016-02-01

    Full Text Available The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions.

  2. A Review of Feature Extraction Software for Microarray Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Ching Siang Tan

    2014-01-01

    Full Text Available When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method.

  3. Bio-medical (EMG Signal Analysis and Feature Extraction Using Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Rhutuja Raut

    2015-03-01

    Full Text Available In this paper, a multi-channel electromyogram acquisition system is developed using a programmable system-on-chip (PSoC) microcontroller to obtain surface EMG signals. Two pairs of single-channel surface electrodes are utilized to measure the EMG signal obtained from forearm muscles. Different levels of the wavelet family are then used to analyze the EMG signal, and features in terms of root mean square, logarithm of root mean square, frequency centroid, and standard deviation are extracted from the EMG signal. The comparison of the proposed feature extraction methods shows that the root mean square feature gives better performance than the other features. In the near future, this method can be used to control a mechanical arm, as well as a robotic arm, in real-time processing.
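
    The named features (RMS, log-RMS, frequency centroid, standard deviation) computed on a wavelet approximation can be sketched as follows; the frame, wavelet and decomposition level are illustrative.

```python
import numpy as np
import pywt

def emg_features(frame, wavelet="db4", level=3):
    """RMS, log-RMS, spectral centroid and std of the DWT approximation of a frame."""
    approx = pywt.wavedec(frame, wavelet, level=level)[0]   # low-frequency band
    rms = np.sqrt(np.mean(approx ** 2))
    spectrum = np.abs(np.fft.rfft(approx))
    freqs = np.fft.rfftfreq(len(approx))
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.array([rms, np.log(rms + 1e-12), centroid, approx.std()])

frame = np.random.default_rng(0).standard_normal(1024)     # toy surface-EMG frame
print(emg_features(frame))
```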

  4. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    Science.gov (United States)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired by satellite sensors have the ability to detect various objects on the earth, from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques to be analyzed further in recent developments in feature extraction and classification.

  5. Micromotion feature extraction of radar target using tracking pulses with adaptive pulse repetition frequency adjustment

    Science.gov (United States)

    Chen, Yijun; Zhang, Qun; Ma, Changzheng; Luo, Ying; Yeo, Tat Soon

    2014-01-01

    In multifunction phased array radar systems, different activities (e.g., tracking, searching, imaging, feature extraction, recognition, etc.) need to be performed simultaneously. To relieve conflicts in radar resource distribution, a micromotion feature extraction method using tracking pulses with adaptive pulse repetition frequencies (PRFs) is proposed in this paper. In this method, a varying PRF is utilized to solve the frequency-domain aliasing problem of the micro-Doppler signal. With appropriate atom-set construction, the micromotion feature can be extracted and the image of the target obtained using the Orthogonal Matching Pursuit algorithm. In our algorithm, the micromotion feature of a radar target is extracted from the tracking pulses, and the quality of the constructed image is fed back to the radar system to adaptively adjust the PRF of the tracking pulses. Finally, simulation results illustrate the effectiveness of the proposed method.

  6. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    Science.gov (United States)

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
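
    Correlated kurtosis, the cyclic-transient detector that motivates the improvement, can be sketched as below; this follows the common shifted-product form, and normalization conventions vary across papers.

```python
import numpy as np

def correlated_kurtosis(y, period, m=2):
    """CK_m(T): squared product of m copies of y shifted by T, over (sum y^2)^m."""
    prod = np.ones(len(y) - (m - 1) * period)
    for k in range(m):
        prod = prod * y[(m - 1 - k) * period: len(y) - k * period]
    return np.sum(prod ** 2) / np.sum(y ** 2) ** m

# periodic impacts (period 100 samples) buried in noise score far above pure noise
rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
impacts = noise.copy()
impacts[::100] += 5.0
print(correlated_kurtosis(impacts, 100), correlated_kurtosis(noise, 100))
```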

  7. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    Directory of Open Access Journals (Sweden)

    Xianglong Chen

    2016-09-01

    Full Text Available Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.

  8. Steps toward subject-specific classification in ECG-based detection of sleep apnea.

    Science.gov (United States)

    Maier, Christoph; Wenz, Heinrich; Dickhaus, Hartmut

    2011-11-01

    This study deals with ECG-based recognition of sleep apnea in epochs of 1 min duration using spectral- and correlation-based features extracted from the modulation of QRS amplitude, respiratory myogram interference and RR intervals. On a database comprising 140 simultaneous recordings of polysomnograms (PSGs) and 8-lead Holter-ECGs, it is shown that a single-parameter ROC threshold classification can achieve high detection rates up to 81.0% sensitivity and 85.6% specificity. Still, individual accuracy may be low, and the improvement employing feature combination by means of second order polynomial classifiers is only marginal. We speculate that individual differences, like co-morbidities, and even intra-individual confounding factors, like nocturnal changes in body position (BP), are major reasons for the difficulties to significantly raise the detection rate using multivariate techniques, which is evident in virtually all papers on that subject. Using the BP information in the PSG, we show a potential benefit for individualized single-feature classifiers by comparing the maximally achievable individual and global accuracy when either one optimal global threshold for the total dataset, individual threshold values for each subject or individual thresholds for each BP are applied. We developed an ECG-based BP segmentation algorithm and finally suggest a potential strategy to derive individually optimized subject-specific threshold values.

  9. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    Science.gov (United States)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.

  10. Automatic extraction of geometric lip features with application to multi-modal speaker identification

    OpenAIRE

    Arsic, I.; Vilagut Abad, R.; Thiran, J.

    2006-01-01

    In this paper we consider the problem of automatic extraction of the geometric lip features for the purposes of multi-modal speaker identification. The use of visual information from the mouth region can be of great importance for improving the speaker identification system performance in noisy conditions. We propose a novel method for automated lip features extraction that utilizes color space transformation and a fuzzy-based c-means clustering technique. Using the obtained visual cues close...

  11. Spatial and Spectral Nonparametric Linear Feature Extraction Method for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Jinn-Min Yang

    2016-11-01

    Full Text Available Feature extraction (FE) or dimensionality reduction (DR) plays quite an important role in the field of pattern recognition. Feature extraction aims to reduce the dimensionality of a high-dimensional dataset to enhance classification accuracy and foster classification speed, particularly when the training sample size is small, namely the small sample size (SSS) problem. Remotely sensed hyperspectral images (HSIs) often come with hundreds of measured features (bands), which potentially provide more accurate and detailed information for classification but generally need more samples to estimate parameters and achieve a satisfactory result. Collecting ground truth for a remotely sensed hyperspectral scene can be considerably difficult and expensive. Therefore, FE techniques have been an important part of hyperspectral image classification. Unlike many feature extraction methods, which are based only on the spectral (band) information of the training samples, feature extraction methods integrating both spatial and spectral information of training samples have shown more effective results in recent years. Spatial contexture information has been proven useful for improving HSI data representation and increasing classification accuracy. In this paper, we propose a spatial and spectral nonparametric linear feature extraction method for hyperspectral image classification. The spatial and spectral information is extracted for each training sample and used to design the within-class and between-class scatter matrices for constructing the feature extraction model. The experimental results on one benchmark hyperspectral image demonstrate that the proposed method obtains more stable and satisfactory results than some existing spectral-based feature extraction methods.
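
    The within-class and between-class scatter construction mentioned here follows the usual discriminant-analysis pattern; the Python sketch below shows that generic skeleton (the paper's spatial-spectral sample weighting is not reproduced, and the ridge term and function names are assumptions).

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def scatter_matrices(X, y):
        """Within-class (Sw) and between-class (Sb) scatter of samples X (n x d)."""
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean_all)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        return Sw, Sb

    def extract(X, y, k):
        """Project onto the k leading generalized eigenvectors of (Sb, Sw)."""
        Sw, Sb = scatter_matrices(X, y)
        vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))  # ridge for stability
        return X @ vecs[:, np.argsort(vals)[::-1][:k]]
    ```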

  12. Robust Speech Recognition Using Temporal Pattern Feature Extracted From MTMLP Structure

    Directory of Open Access Journals (Sweden)

    Yasser Shekofteh

    2014-10-01

    Full Text Available The temporal pattern feature of a speech signal can be extracted either from the time domain or from front-end feature vectors. This feature includes long-term information about variations in the connected speech units. In this paper, the second approach is followed, i.e., temporal patterns are computed from front-end feature vectors, namely spectral-based (LFBE) and cepstrum-based (MFCC) features. To extract these features, we use the posterior probability-based output of the proposed MTMLP neural networks. The combination of the temporal patterns, which represent the long-term dynamics of the speech signal, with some traditional features composed of the MFCC and its first and second derivatives is evaluated in an ASR task. It is shown that the use of such a combined feature vector increases phoneme recognition accuracy by more than 1 percent relative to the baseline system, which does not benefit from the long-term temporal patterns. In addition, it is shown that the features extracted by the proposed method give robust recognition under different noise conditions (by 13 percent) and, therefore, the proposed method is a robust feature extraction method.

  13. EEG Signal Denoising and Feature Extraction Using Wavelet Transform in Brain Computer Interface

    Institute of Scientific and Technical Information of China (English)

    WU Ting; YAN Guo-zheng; YANG Bang-hua; SUN Hong

    2007-01-01

    Electroencephalogram (EEG) signal preprocessing is one of the most important techniques in brain computer interface (BCI). The target is to increase the signal-to-noise ratio and make the signal more favorable for feature extraction and pattern recognition. The wavelet transform is a method of multi-resolution time-frequency analysis; it can decompose a mixed signal consisting of different frequencies into different frequency bands. The EEG signal is analyzed and denoised using the wavelet transform. Moreover, the wavelet transform can be used for EEG feature extraction. The energies of specific sub-bands and the corresponding decomposition coefficients which have maximal separability according to the Fisher distance criterion are selected as features. The eigenvector for classification is obtained by combining the effective features from different channels. The performance is evaluated by separability and pattern recognition accuracy using the data set of the BCI 2003 Competition; the final classification results prove the effectiveness of this technology for EEG denoising and feature extraction.
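
    A small PyWavelets sketch of the two roles of the wavelet transform described here, denoising and sub-band energy feature extraction; the db4 wavelet, universal soft threshold, and Fisher-criterion helper are common defaults assumed for illustration, not the paper's exact settings.

    ```python
    import numpy as np
    import pywt

    def denoise_and_band_energies(signal, wavelet="db4", level=5):
        """Soft-threshold the detail coefficients (universal threshold), then
        return the denoised signal and the energy of each sub-band."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, wavelet)
        energies = np.array([np.sum(c ** 2) for c in coeffs])    # per-band features
        return denoised, energies

    def fisher_distance(f1, f2):
        """Fisher criterion for one scalar feature observed in two classes."""
        return (f1.mean() - f2.mean()) ** 2 / (f1.var() + f2.var() + 1e-12)
    ```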

  14. A feature extraction method for the signal sorting of interleaved radar pulse serial

    Institute of Scientific and Technical Information of China (English)

    GUO Qiang; ZHANG Xingzhou; LI Zheng

    2007-01-01

    In this paper, a new feature extraction method for radar pulse sequences is presented based on the structure function and empirical mode decomposition. In this method, 2-D feature information is constituted from radio frequency and time-of-arrival, analyzing the features of radar pulse sequences for the very first time by employing the structure function and empirical mode decomposition. The experiment shows that the method can efficiently extract the frequency of a period-varying radio frequency signal in a complex pulse environment and reveals a new feature for the signal sorting of interleaved radar pulse series. This paper provides a novel way of extracting a new sorting feature for radar signals.
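
    The abstract does not spell out its structure-function definition; assuming the standard second-order structure function over the pulse-parameter sequence, a minimal Python sketch follows (the EMD stage is omitted).

    ```python
    import numpy as np

    def structure_function(x, max_lag):
        """Second-order structure function S(tau) = <(x[n+tau] - x[n])^2>,
        evaluated for lags 1..max_lag over a 1-D parameter sequence x
        (e.g., radio frequency ordered by time of arrival)."""
        x = np.asarray(x, dtype=float)
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                         for lag in range(1, max_lag + 1)])
    ```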

  15. A Method of SAR Target Recognition Based on Gabor Filter and Local Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Wang Lu

    2015-12-01

    Full Text Available This paper presents a novel texture feature extraction method based on a Gabor filter and Three-Patch Local Binary Patterns (TPLBP) for Synthetic Aperture Radar (SAR) target recognition. First, SAR images are processed by a Gabor filter in different directions to enhance the significant features of the targets and their shadows. Then, the effective local texture features based on the Gabor-filtered images are extracted by TPLBP. This not only overcomes the shortcoming of Local Binary Patterns (LBP), which cannot describe texture features for large-scale neighborhoods, but also maintains the rotation-invariant characteristic, which alleviates the impact of the direction variations of SAR targets on recognition performance. Finally, we use an Extreme Learning Machine (ELM) classifier on the extracted texture features. The experimental results on the MSTAR database demonstrate the effectiveness of the proposed method.
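
    A hedged Python/OpenCV sketch of the first stage, multi-orientation Gabor filtering of a SAR amplitude image; the kernel parameters are illustrative, and the TPLBP and ELM stages are omitted.

    ```python
    import cv2
    import numpy as np

    def gabor_bank(image, n_orientations=8, ksize=31, sigma=4.0,
                   lambd=10.0, gamma=0.5):
        """Filter the image with Gabor kernels at evenly spaced orientations and
        keep the per-pixel maximum response, enhancing target/shadow structure."""
        img = image.astype(np.float32)
        responses = []
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
        return np.max(np.stack(responses), axis=0)
    ```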

  16. NEW METHOD FOR WEAK FAULT FEATURE EXTRACTION BASED ON SECOND GENERATION WAVELET TRANSFORM AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Duan Chendong; He Zhengjia; Jiang Hongkai

    2004-01-01

    A new time-domain analysis method that uses the second generation wavelet transform (SGWT) for weak fault feature extraction is proposed. To extract incipient fault features, a biorthogonal wavelet with impact characteristics is constructed using the SGWT. By processing the SGWT detail signal with a sliding window devised on the basis of the rotating operation cycle and extracting the modulus maxima from each window, time-domain fault features are highlighted. To further analyze the cause of the fault, a wavelet packet transform based on the SGWT is used to process the vibration data again. By calculating the energy of each frequency band, the energy distribution features of the signal are obtained. Then, taking account of the fault features and the energy distribution, the cause of the fault is determined. An early impact-rub fault caused by axis misalignment and rotor imbalance was successfully detected using this method in an oil refinery.

  17. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    Science.gov (United States)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment. Properly decomposing the signal and extracting the effective information is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods such as EMD suffer from mode mixing, low decomposition accuracy, and other problems. To address these problems, the extreme average envelope decomposition (EAED) method is presented based on EMD. The EAED method has three advantages. First, it is completed through a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, in order to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate out single-frequency components from a complex signal. It not only isolates the vibration frequency components characteristic of three typical bearing faults but also requires fewer decomposition layers, since a single envelope replaces the quadratic enveloping while still isolating the fault characteristic frequencies. The precision of signal decomposition is thereby improved.

  18. Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs

    Science.gov (United States)

    Simon, Victor; Johansson, Carl-Anders; Galar, Diego

    2017-09-01

    All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.

  19. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    Science.gov (United States)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because of their low computational cost, but they possess a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field programmable gate arrays, pipelined architectures can be used to enhance the operating efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
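
    For illustration, a generic variable step-size LMS noise canceller in Python; the paper's specific step-size law, the delayed-update detail, and the FPGA pipelining are not reproduced, so this is a behavioral sketch with assumed parameters.

    ```python
    import numpy as np

    def vss_lms(noisy_ecg, noise_ref, L=16, mu0=0.01, alpha=0.97,
                gamma=1e-4, mu_max=0.05):
        """Adaptive noise canceller: an FIR filter driven by the noise reference
        tracks the artefact; the error output is the cleaned ECG. The step size
        grows with error power and decays geometrically (a common VSS rule)."""
        w = np.zeros(L)
        mu = mu0
        out = np.zeros(len(noisy_ecg))
        for n in range(L, len(noisy_ecg)):
            x = noise_ref[n - L:n][::-1]       # most recent reference taps
            e = noisy_ecg[n] - w @ x           # error = cleaned ECG sample
            mu = min(alpha * mu + gamma * e * e, mu_max)
            w += 2 * mu * e * x                # LMS weight update
            out[n] = e
        return out
    ```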

  20. Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language.

    Science.gov (United States)

    Shanableh, Tamer; Assaleh, Khaled; Al-Rousan, M

    2007-06-01

    This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognition of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extractions. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K nearest neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields comparable results to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is actually advantageous for both computational and storage requirements of the classifier.
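
    A compact Python sketch of the temporal stage described above: thresholded forward-prediction (frame-difference) errors are accumulated into a single motion image; backward and bidirectional prediction are analogous, and the threshold is an assumed value.

    ```python
    import numpy as np

    def motion_image(frames, threshold=15.0):
        """frames: (T, H, W) grayscale clip. Accumulate thresholded forward
        frame-prediction errors into one motion representation."""
        frames = frames.astype(np.float32)
        acc = np.zeros(frames.shape[1:], dtype=np.float32)
        for t in range(1, len(frames)):
            acc += (np.abs(frames[t] - frames[t - 1]) > threshold)
        return acc / (len(frames) - 1)   # spatial features (e.g., a 2-D transform) follow
    ```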

  1. A Fast Feature Extraction Method Based on Integer Wavelet Transform for Hyperspectral Images

    Institute of Scientific and Technical Information of China (English)

    GU Yanfeng; ZHANG Ye; YU Shanshan

    2004-01-01

    Hyperspectral remote sensing provides high-resolution spectral data and the potential for remote discrimination between subtle differences in ground covers. However, the high-dimensional data space generated by hyperspectral sensors creates a new challenge for conventional spectral data analysis techniques. A challenging problem in using hyperspectral data is to eliminate redundancy while preserving useful spectral information for applications. In this paper, a Fast feature extraction (FFE) method based on the integer wavelet transform is proposed to extract useful features and reduce the dimensionality of hyperspectral images. The FFE method can be directly used to extract useful features from the spectral vector of each pixel in a hyperspectral image. The FFE method has two main merits: high computational efficiency and good ability to extract spectral features. To verify the effectiveness and performance of the proposed method, classification experiments are performed on two groups of AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data. In addition, three existing methods for feature extraction of hyperspectral images, i.e., PCA, SPCT and the wavelet transform, are applied to the same data for comparison with the proposed method. The experimental investigation shows that the FFE method outperforms the other three methods in feature-extraction efficiency.

  2. Object learning improves feature extraction but does not improve feature selection.

    Directory of Open Access Journals (Sweden)

    Linus Holm

    Full Text Available A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: 1) select more informative image locations upon which to fixate their eyes, or 2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: the number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice.

  3. Wireless Smartphone ECG Enables Large-Scale Screening in Diverse Populations.

    Science.gov (United States)

    Haberman, Zachary C; Jahn, Ryan T; Bose, Rupan; Tun, Han; Shinbane, Jerold S; Doshi, Rahul N; Chang, Philip M; Saxon, Leslie A

    2015-05-01

    The ubiquitous presence of internet-connected phones and tablets presents a new opportunity for cost-effective and efficient electrocardiogram (ECG) screening and on-demand diagnosis. Wireless, single-lead real-time ECG monitoring supported by iOS and Android devices can be obtained quickly and on-demand. ECGs can be immediately downloaded and reviewed using any internet browser. We compared the standard 12-lead ECG to the smartphone ECG in healthy young adults, elite athletes, and cardiology clinic patients. Accuracy for determining baseline ECG intervals and rate and rhythm was assessed. In 381 participants, 30-second lead I ECG waveforms were obtained using an iPhone case or iPad. Standard 12-lead ECGs were acquired immediately after the smartphone tracing was obtained. De-identified ECGs were interpreted by automated algorithms and adjudicated by two board-certified electrophysiologists. Both smartphone and standard ECGs detected atrial rate and rhythm, AV block, and QRS delay with equal accuracy. Sensitivities ranged from 72% (QRS delay) to 94% (atrial fibrillation). Specificities were all above 94% for both modalities. Smartphone ECG accurately detects baseline intervals, atrial rate, and rhythm and enables screening in diverse populations. Efficient ECG analysis using automated discrimination and an enhanced smartphone application with notification capabilities are features that can be easily incorporated into the acquisition process. © 2015 Wiley Periodicals, Inc.

  4. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Full Text Available Machining feature recognition provides the necessary platform for computer aided process planning (CAPP) and plays a key role in the integration of computer aided design (CAD) and computer aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometric data of a CAD model present in the form of Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the manufacturing activities required to manufacture the part. Software in the Microsoft Visual C++ environment was developed to recognize the features, where geometric information of the part is extracted from the CAD model. Using this data, an output file, i.e., a text file, is generated which lists all the machinable features present in the part. This process has been tested on various parts and successfully extracted all the features.

  5. Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery

    Science.gov (United States)

    2014-12-01

    TECHNICAL REPORT 2070, December 2014: Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery. [Only table-of-contents fragments of the report are preserved in this record.] The descriptors are clustered and pooled with respect to a dictionary of vocabulary features obtained from training imagery.

  6. SVD-TLS extending Prony algorithm for extracting UWB radar target feature

    Institute of Scientific and Technical Information of China (English)

    Liu Donghong; Hu Wenlong; Chen Zhijie

    2008-01-01

    A new method, the SVD-TLS extending Prony algorithm, is introduced for extracting UWB radar target features. The method is a modified classical Prony method based on singular value decomposition and total least squares that improves the robustness of spectrum estimation. Simulation results show that the poles and residues of the target echo can be extracted effectively using this method and, at the same time, random noise can be restrained to some degree. It is applicable to target feature extraction for UWB radar and other high-resolution range radars.
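
    A simplified Python sketch of pole extraction by linear prediction with SVD rank truncation; the total-least-squares refinement that gives the method its name is omitted for brevity, and the prediction order is an assumed free parameter.

    ```python
    import numpy as np

    def prony_poles(x, p, order=None):
        """Estimate the p dominant poles of a transient echo x by solving the
        linear-prediction equations with a rank-p truncated SVD."""
        n = order or 3 * p                     # prediction order > number of poles
        N = len(x)
        A = np.array([x[i:i + n] for i in range(N - n)])   # Hankel data matrix
        b = np.asarray(x[n:], dtype=float)
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = np.zeros_like(s)
        s_inv[:p] = 1.0 / s[:p]                # keep only p singular values
        a = (Vt.T * s_inv) @ (U.T @ b)         # truncated least-squares solution
        poly = np.concatenate(([1.0], -a[::-1]))   # z^n - a[n-1]z^(n-1) - ... - a[0]
        roots = np.roots(poly)
        return roots[np.argsort(-np.abs(roots))][:p]   # p strongest poles
    ```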

  7. Prediction of occult invasive disease in ductal carcinoma in situ using computer-extracted mammographic features

    Science.gov (United States)

    Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.

    2017-03-01

    Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy proven DCIS. We proposed a computer-vision algorithm based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach is able to segment individual microcalcifications (MCs), detect the boundary of the MC cluster (MCC), and extract 113 mammographic features from MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classifications between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.

  8. Improved Framework for Breast Cancer Detection using Hybrid Feature Extraction Technique and FFNN

    Directory of Open Access Journals (Sweden)

    Ibrahim Mohamed Jaber Alamin

    2016-10-01

    Full Text Available Early breast cancer detection based on image processing suffers from limited accuracy in various automated medical tools. To improve the accuracy, many research studies are still ongoing on the different phases, such as segmentation, feature extraction, detection, and classification. This paper presents a hybrid, automated image-processing-based framework for breast cancer detection, consisting of four main steps: image preprocessing, image segmentation, feature extraction, and finally classification. For image preprocessing, both Laplacian and average filtering are used for smoothing and noise reduction, if any. These operations are performed on 256 x 256 gray-scale images. The output of the preprocessing phase is used in the segmentation phase; a separate algorithm is designed for the preprocessing step with the goal of improving accuracy. The segmentation method is an improved version of the region-growing technique: breast image segmentation is done using the proposed modified region-growing technique, which overcomes the limitations of orientation as well as intensity. The next step is feature extraction; for this framework we propose a combination of different types of features, such as texture features, gradient features, and 2D-DWT features with higher-order statistics (HOS). Such a hybrid feature set helps to improve the detection accuracy. For the last phase, we propose an efficient feed-forward neural network (FFNN). A comparative study between the existing 2D-DWT feature extraction and the proposed HOS-2D-DWT-based feature extraction is presented.
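
    For orientation, a baseline intensity-based region-growing routine in Python; the paper's modified variant adds the orientation and intensity handling described above, which is not reproduced here, and all names are illustrative.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=10.0):
        """Grow a region from one seed pixel, admitting 4-neighbours whose
        intensity stays within tol of the running region mean."""
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        region_mean, count = float(img[seed]), 1
        q = deque([seed])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(float(img[ny, nx]) - region_mean) <= tol):
                    mask[ny, nx] = True
                    count += 1
                    region_mean += (float(img[ny, nx]) - region_mean) / count
                    q.append((ny, nx))
        return mask
    ```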

  9. The Hybrid KICA-GDA-LSSVM Method Research on Rolling Bearing Fault Feature Extraction and Classification

    Directory of Open Access Journals (Sweden)

    Jiyong Li

    2015-01-01

    Full Text Available Rolling element bearings are widely used in high-speed rotating machinery; thus a proper monitoring and fault diagnosis procedure to avoid major machine failures is necessary. As feature extraction and classification based on vibration signals are important in condition monitoring, and superfluous features may degrade classification performance, independent features need to be extracted; therefore an LSSVM (least squares support vector machine) based on hybrid KICA-GDA (kernel independent component analysis-generalized discriminant analysis) is presented in this study. A new method named sensitive subband feature set design (SSFD) based on the wavelet packet is also presented; the sensitive subbands are selected using the proposed variance differential spectrum method. First, independent features are obtained by KICA and feature redundancy is reduced. Second, the feature dimension is reduced by GDA. Finally, the projected features are classified by LSSVM. The whole paper aims to classify the feature vectors extracted from the time series and the magnitude of spectral analysis and to discriminate the state of the rolling element bearings by virtue of a multiclass LSSVM. Experimental results from two different fault-seeded bearing tests show the good performance of the proposed method.

  10. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    Science.gov (United States)

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbol entropy to identify near-infrared spectra of special quality eggs. The authors selected normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.
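
    A rough Python sketch of the feature itself: piecewise aggregate symbolization of a normalized spectrum followed by Shannon entropy of the symbol histogram; the segment count, alphabet size, and Gaussian breakpoints are standard SAX-style assumptions rather than the paper's exact parameters.

    ```python
    import numpy as np

    def symbol_entropy(spectrum, n_segments=64):
        """Symbolize a spectrum (alphabet of 4) and return the Shannon entropy
        of the resulting symbol distribution."""
        x = (spectrum - spectrum.mean()) / (spectrum.std() + 1e-12)
        paa = np.array([s.mean() for s in np.array_split(x, n_segments)])
        breakpoints = [-0.6745, 0.0, 0.6745]   # equiprobable N(0,1) cuts, alphabet 4
        symbols = np.digitize(paa, breakpoints)
        counts = np.bincount(symbols, minlength=4)
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log2(p)))
    ```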

  11. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    Directory of Open Access Journals (Sweden)

    Evelio José González

    2009-12-01

    Full Text Available In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  12. Interplay of spatial aggregation and computational geometry in extracting diagnostic features from cardiac activation data.

    Science.gov (United States)

    Ironi, Liliana; Tentoni, Stefania

    2012-09-01

    Functional imaging plays an important role in the assessment of organ functions, as it provides methods to represent the spatial behavior of diagnostically relevant variables within reference anatomical frameworks. The salient physical events that underlie a functional image can be unveiled by appropriate feature extraction methods capable of exploiting domain-specific knowledge and spatial relations at multiple abstraction levels and scales. In this work we focus on general feature extraction methods that can be applied to cardiac activation maps, a class of functional images that embed spatio-temporal information about the wavefront propagation. The described approach integrates a qualitative spatial reasoning methodology with techniques borrowed from computational geometry to provide a computational framework for the automated extraction of basic features of the activation wavefront kinematics and specific sets of diagnostic features that identify an important class of rhythm pathologies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images include complicated information, methods that extract roads by spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain road seeds (semi-automatic extraction), which entails great human dependence and low efficiency. A road-extraction method that uses image segmentation based on the principle of local gray consistency and integrates shape features is proposed in this paper. First, the image is segmented, and then linear and curved roads are obtained by using several object shape features, rectifying methods that extract only linear roads. Second, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images with better gray uniformity of roads and worse illumination of road surfaces were chosen, and the results prove that the method of this study is promising.

  14. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the next stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent true ground elevation. Hence, the model was edited to get the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain: we normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM
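
    The central raster operation of this workflow, nDSM/CHM = DSM - DEM, reduces to an array subtraction once the rasters are co-registered; a minimal Python sketch under that assumption (grids in metres, names illustrative) follows.

    ```python
    import numpy as np

    def canopy_height_model(dsm, dem, min_height=2.0):
        """nDSM/CHM = DSM - DEM on co-registered rasters; values below
        min_height (metres) are treated as ground/clutter and zeroed."""
        chm = dsm - dem
        chm[chm < min_height] = 0.0
        return chm
    ```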

  15. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, machine-learning algorithms' main constraint relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN; therefore, overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is

  16. Application of Texture Characteristics for Urban Feature Extraction from Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    D.Shanmukha Rao

    2014-12-01

    Full Text Available The quest for foolproof methods for extracting various urban features from high-resolution satellite imagery with minimal human intervention has resulted in the development of texture-based algorithms. In view of the fact that the textural properties of images provide valuable information for discrimination purposes, it is appropriate to employ texture-based algorithms for feature extraction. The Gray Level Co-occurrence Matrix (GLCM) method represents a highly efficient technique for extracting second-order statistical texture features. The various urban features can be distinguished based on a set of features, viz. energy, entropy, homogeneity, etc., that characterize different aspects of the underlying texture. As a preliminary step, a notable number of regions of interest of the urban feature and contrast locations are identified visually. After calculating the Gray Level Co-occurrence Matrices of these selected regions, the aforementioned texture features are computed. These features can be used to shape a high-dimensional feature vector to carry out content-based retrieval. The insignificant features are eliminated to reduce the dimensionality of the feature vector by executing Principal Components Analysis (PCA). The selection of the discriminating features is also aided by the value of the Jeffreys-Matusita (JM) distance, which serves as a measure of class separability. Feature identification is then carried out by computing these chosen feature vectors for every pixel of the entire image and comparing them with their corresponding mean values. This helps in identifying and classifying the pixels corresponding to the urban feature being extracted. To reduce commission errors, various index values, viz. the Soil Adjusted Vegetation Index (SAVI), Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI), are assessed for each pixel. The extracted output is then median filtered to isolate the feature of interest after removing salt-and-pepper noise.
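
    A brief scikit-image sketch of GLCM texture features (recent releases spell the functions graycomatrix/graycoprops); entropy is computed manually since graycoprops does not provide it, and the distance/angle settings are illustrative.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(patch):
        """GLCM statistics of a 2-D uint8 image patch, averaged over
        four orientations at distance 1."""
        glcm = graycomatrix(patch, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        feats = {p: float(graycoprops(glcm, p).mean())
                 for p in ("energy", "homogeneity", "contrast", "correlation")}
        nz = glcm[glcm > 0]
        feats["entropy"] = float(-np.sum(nz * np.log2(nz)))
        return feats
    ```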

  17. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    Science.gov (United States)

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2016-09-12

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature-representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently exploit the complementary properties of the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.

  18. Flexible Graphene Electrodes for Prolonged Dynamic ECG Monitoring

    Directory of Open Access Journals (Sweden)

    Cunguang Lou

    2016-11-01

    Full Text Available This paper describes the development of a graphene-based dry flexible electrocardiography (ECG) electrode and a portable wireless ECG measurement system. First, graphene films on polyethylene terephthalate (PET) substrates and graphene paper were used to construct the ECG electrode. Then, a graphene textile was synthesized for the fabrication of a wearable ECG monitoring system. The structure and the electrical properties of the graphene electrodes were evaluated using Raman spectroscopy, scanning electron microscopy (SEM), and alternating current impedance spectroscopy. ECG signals were then collected from healthy subjects using the developed graphene electrode and portable measurement system. The results show that the graphene electrode was able to acquire the typical characteristics and features of human ECG signals with a high signal-to-noise ratio (SNR) in different states of motion. A week-long continuous wearability test showed no degradation in ECG signal quality over time. The graphene-based flexible electrode demonstrates comfort, good biocompatibility, and high electrophysiological detection sensitivity. The graphene electrode also combines the potential for use in long-term wearable dynamic cardiac activity monitoring systems with convenience and comfort for use in the home health care of elderly and high-risk adults.

  19. Special features of SCF solid extraction of natural products: deoiling of wheat gluten and extraction of rose hip oil

    Directory of Open Access Journals (Sweden)

    Eggers R.

    2000-01-01

    Full Text Available Supercritical CO2 extraction has shown great potential for separating vegetable oils as well as removing undesirable oil residues from natural products. The influence of process parameters, such as pressure, temperature, mass flow and particle size, on the mass transfer kinetics of different natural products has been studied by many authors. However, few publications have focused on specific features of the raw material (moisture, mechanical pretreatment, bed compressibility, etc.), which could play an important role, particularly in the scale-up of extraction processes. A review of the influence of both process parameters and specific features of the material on oilseed extraction is given in Eggers (1996). Mechanical pretreatment has been commonly used in order to facilitate mass transfer from the material into the supercritical fluid. However, small particle sizes, especially when combined with high moisture contents, may lead to inefficient extraction results. This paper focuses on the problems that appear during scale-up from lab to pilot or industrial plant scale related to the pretreatment of the material, the control of initial water content, and vessel shape. Two applications were studied: deoiling of wheat gluten with supercritical carbon dioxide to produce a totally oil-free (< 0.1% oil) powder (wheat gluten), and the extraction of oil from rose hip seeds. Different ways of pretreating the feed material were successfully tested in order to develop an industrial-scale gluten deoiling process. The influence of the shape and size of the fixed bed on the extraction results was also studied. In the case of rose hip seeds, the present work discusses the influence of pretreatment of the seeds prior to the extraction process on extraction kinetics.

  20. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic and consists of the following modules: face detection, facial feature detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on mutual information criteria. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.

  1. FAST DISCRETE CURVELET TRANSFORM BASED ANISOTROPIC FEATURE EXTRACTION FOR IRIS RECOGNITION

    Directory of Open Access Journals (Sweden)

    Amol D. Rahulkar

    2010-11-01

    Full Text Available Feature extraction plays a very important role in iris recognition. Recent research on multiscale analysis provides a good opportunity to extract more accurate information for iris recognition. In this work, new directional iris texture features based on the 2-D Fast Discrete Curvelet Transform (FDCT) are proposed. The proposed approach divides the normalized iris image into six sub-images, and the curvelet transform is applied independently to each sub-image. The anisotropic feature vector for each sub-image is derived using the directional energies of the curvelet coefficients. These six feature vectors are combined to create the resultant feature vector. During recognition, the nearest neighbor classifier based on Euclidean distance is used for authentication. The effectiveness of the proposed approach has been tested on two different databases, namely UBIRIS and MMU1. Experimental results show the superiority of the proposed approach.

  2. Applying a Locally Linear Embedding Algorithm for Feature Extraction and Visualization of MI-EEG

    Directory of Open Access Journals (Sweden)

    Mingai Li

    2016-01-01

    Full Text Available A robotic-assisted rehabilitation system based on a Brain-Computer Interface (BCI) is an applicable solution for stroke survivors with a poorly functioning hemiparetic arm. The key technique of such a rehabilitation system is the feature extraction of Motor Imagery Electroencephalography (MI-EEG), which is a nonlinear, time-varying and nonstationary signal with remarkable time-frequency characteristics. Though a few researchers have made efforts to explore its nonlinear nature from the perspective of manifold learning, they hardly take full account of both the time-frequency features and the nonlinear nature. In this paper, a novel feature extraction method is proposed based on the Locally Linear Embedding (LLE) algorithm and the DWT. Multiscale, multiresolution analysis is implemented for MI-EEG by the DWT. LLE is applied to the approximation components to extract the nonlinear features, and the statistics of the detail components are calculated to obtain the time-frequency features. Then, the two kinds of features are combined serially. A backpropagation neural network is optimized by a genetic algorithm and employed as a classifier to evaluate the effectiveness of the proposed method. The experimental results of 10-fold cross validation on a public BCI Competition dataset show that the nonlinear features visually display an obvious clustering distribution and that the fused features improve classification accuracy and stability. This paper successfully achieves the application of manifold learning in BCI.
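
    A sketch of the serial feature combination described above, using PyWavelets and scikit-learn: LLE is applied to the DWT approximation coefficients for the nonlinear features, and simple statistics of the detail sub-bands supply the time-frequency features. The wavelet, level, and neighbour count are assumed values, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from sklearn.manifold import LocallyLinearEmbedding

    def dwt_lle_features(trials, wavelet="db4", level=4, n_components=3, k=12):
        """trials: (n_trials, n_samples) single-channel MI-EEG epochs.
        Returns serially combined nonlinear + time-frequency features."""
        approx, stats = [], []
        for x in trials:
            coeffs = pywt.wavedec(x, wavelet, level=level)
            approx.append(coeffs[0])                       # approximation part
            stats.append([(c.mean(), c.std(), np.sum(c ** 2))
                          for c in coeffs[1:]])            # detail statistics
        lle = LocallyLinearEmbedding(n_neighbors=k, n_components=n_components)
        nonlinear = lle.fit_transform(np.array(approx))    # manifold features
        timefreq = np.array(stats).reshape(len(trials), -1)
        return np.hstack([nonlinear, timefreq])            # serial combination
    ```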

  3. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    Science.gov (United States)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in the improvement of hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameter, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea to determine the local mean. Second, two new weighting functions are used: samples close to class boundaries have more weight in the formation of the between-class scatter matrix, and samples close to the class mean have more weight in the formation of the within-class scatter matrix. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than some other nonparametric and parametric feature extraction methods.

  4. A Fault Feature Extraction Method for Rolling Bearing Based on Pulse Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Jinbao Yao

    2016-01-01

    Full Text Available The shock pulse method is a widely used technique for condition monitoring of rolling bearings. However, it may cause erroneous diagnoses in the presence of strong background noise or other shock sources. To overcome this shortcoming, a pulse adaptive time-frequency transform method is proposed to extract the fault features of a damaged rolling bearing. The method arranges the shock pulses extracted by the shock pulse method in time order and takes the reciprocals of the time intervals between the pulse at a given moment and every other pulse as the instantaneous frequency components at that moment. It then visually displays the variation of each instantaneous frequency after a plane transformation of the instantaneous frequency components, realizes the time-frequency transform of the shock pulse sequence through amplitude-relevancy processing in the time-frequency domain, and highlights the fault feature frequencies by effective instantaneous frequency extraction, so as to extract the fault features of the damaged rolling bearing. The results of simulation and application show that the proposed method suppresses noise well, highlights the fault feature frequencies, and avoids erroneous diagnosis, so it is an effective fault feature extraction method for rolling bearings with high time-frequency resolution.

  5. Satellite Imagery Cadastral Features Extractions using Image Processing Algorithms: A Viable Option for Cadastral Science

    Directory of Open Access Journals (Sweden)

    Usman Babawuro

    2012-07-01

    Full Text Available Satellite images are used for feature extraction among other functions. They are used to extract linear features, like roads; such linear feature extraction is an important operation in computer vision, which has varied applications in photogrammetric, hydrographic, cartographic and remote sensing tasks. The extraction of linear features or boundaries defining the extents of lands and land cover features is equally important in cadastral surveying, and cadastral surveying is the cornerstone of any cadastral system. A two-dimensional cadastral plan is a model which represents both the cadastral and geometrical information of a two-dimensional labeled image. This paper aims at using and widening the concepts of high-resolution satellite imagery data for extracting representations of cadastral boundaries using image processing algorithms, hence minimizing human intervention. The satellite imagery is first rectified, thus establishing it in the correct orientation and spatial location for further analysis. We then employ the widely available satellite imagery to extract the relevant cadastral features using computer vision and image processing algorithms. We evaluate the potential of using high-resolution satellite imagery to achieve the cadastral goals of boundary detection and extraction of farmlands using image processing algorithms. This method proves effective as it minimizes the human shortcomings associated with the cadastral surveying method, hence providing another perspective on achieving cadastral goals as emphasized by the UN cadastral vision. Finally, as cadastral science continues to look to the future, this research aimed at the analysis of, and insight into, the characteristics and potential role of computer vision algorithms using high-resolution satellite imagery for a better digital cadastre that would provide improved socio-economic development.

  6. Comparative Analysis of the Discriminative Capacity of EEG, Two ECG-Derived and Respiratory Signals in Automatic Sleep Staging

    Directory of Open Access Journals (Sweden)

    Farideh Ebrahimi

    2017-01-01

    Full Text Available Highly accurate classification of sleep stages is possible based on EEG signals alone. However, reliable and high quality acquisition of these signals in the home environment is difficult. Instead, electrocardiogram (ECG) and respiratory (Res) signals are easier to record and may offer a practical alternative for home monitoring of sleep. Therefore, automatic sleep staging was performed using ECG, Res (thoracic excursion) and EEG signals from 31 nocturnal recordings of the Sleep Heart Health Study (SHHS) polysomnography database. Feature vectors were extracted from 0.5 min (standard) epochs of sleep data by time-domain, frequency-domain, time-frequency and nonlinear methods and optimized by using the Support Vector Machine-Recursive Feature Elimination (SVM-RFE) method. These features were then classified by using an SVM. Classification based upon EEG features produced a Correct Classification Ratio CCR=0.92. In comparison, features derived from ECG signals alone, that is, the combination of Heart Rate Variability (HRV) and ECG-Derived Respiration (EDR) signals, produced a CCR=0.54, while features based on the combination of HRV and (thoracic) Res signals resulted in a CCR=0.57. Overall comparison of the results based on standard epochs of EEG signals with those obtained from 5-minute (long) epochs of cardiorespiratory signals revealed that an acceptable CCR=0.81 and discriminative capacity (Accuracy=89.32%, Specificity=92.88% and Sensitivity=78.64%) were also achievable when using optimal feature sets derived from long epochs of the latter signals in sleep staging. In addition, it was observed that the presence of some artifacts (like bigeminy) in the cardiorespiratory signals reduced the accuracy of automatic sleep staging more than the artifacts that contaminated the EEG signals.
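
    The SVM-RFE optimization step can be sketched with scikit-learn's RFE wrapper, assuming a precomputed epoch-by-feature matrix X and stage labels y (random placeholders below, not SHHS data); RFE requires a linear kernel so feature weights are available for ranking, while the final classification may use an RBF kernel.

    ```python
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    # Hypothetical epoch-by-feature matrix and sleep-stage labels (placeholders)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))
    y = rng.integers(0, 5, size=200)       # e.g. 5 sleep stages

    # Recursive feature elimination around a linear-kernel SVM
    selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1)
    selector.fit(X, y)
    optimal_features = X[:, selector.support_]

    # The reduced feature set is then classified by an SVM (RBF here)
    clf = SVC(kernel="rbf").fit(optimal_features, y)
    ```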

  7. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoid the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.

  8. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    Science.gov (United States)

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with approximately 3.5% accuracy improvement over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. A Novel Feature Selection Strategy for Enhanced Biomedical Event Extraction Using the Turku System

    Directory of Open Access Journals (Sweden)

    Jingbo Xia

    2014-01-01

    Full Text Available Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, and it relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying a greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identifying important features and modifying the feature set accordingly. With the updated feature set, a new system is obtained with enhanced performance, achieving an F-score increased to 53.27% from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion.

  10. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

    Full Text Available The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called “Histogram Backprojection”. The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
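
    A normalized chromaticity histogram of the kind used for the color features can be sketched as follows; this follows the standard definition r = R/(R+G+B), g = G/(R+G+B) rather than the paper's exact binning.

    ```python
    import numpy as np

    def normalized_chromaticity_histogram(rgb_image, bins=32):
        """2-D histogram over normalized chromaticities r=R/(R+G+B), g=G/(R+G+B);
        intensity is factored out, so only color information remains."""
        px = rgb_image.reshape(-1, 3).astype(float)
        s = px.sum(axis=1)
        px, s = px[s > 0], s[s > 0]          # drop pure-black pixels
        r, g = px[:, 0] / s, px[:, 1] / s
        hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
        return hist / hist.sum()             # normalize to a distribution
    ```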

  11. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning play a critical role for visual perception tasks. We focus on improving the robustness of the kernel descriptors (KDES) by embedding context cues and further learning a compact and discriminative feature codebook for feature reduction using Rényi entropy based mutual information ... improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting ...

  12. A Frequent Pattern Mining Algorithm for Feature Extraction of Customer Reviews

    Directory of Open Access Journals (Sweden)

    Seyed Hamid Ghorashi

    2012-07-01

    Full Text Available Online shoppers often have different ideas about the same product. They look for product features that are consistent with their goals. A feature that is interesting to one shopper may make no impression on another. Unfortunately, identifying a target product with particular features is a tough task that is not achievable with the existing functionality provided by common websites. In this paper, we present a frequent pattern mining algorithm to mine a collection of reviews and extract product features. Our experimental results indicate that the algorithm outperforms the older pattern mining techniques used by previous researchers.
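
    A minimal frequent-pattern count over review "transactions" illustrates the general idea (the paper's actual algorithm and thresholds are not specified here); candidate feature words and word pairs are kept when their support exceeds a threshold.

    ```python
    from collections import Counter
    from itertools import combinations

    def frequent_feature_sets(reviews, min_support=0.02, max_size=2):
        """Count candidate feature words/word-pairs per review (transaction)
        and keep those whose support exceeds the threshold."""
        counts = Counter()
        for review in reviews:
            words = set(review.lower().split())
            for size in range(1, max_size + 1):
                counts.update(frozenset(c) for c in combinations(sorted(words), size))
        n = len(reviews)
        return {fs: c / n for fs, c in counts.items() if c / n >= min_support}

    reviews = ["battery life is great", "poor battery life", "great screen"]
    print(frequent_feature_sets(reviews, min_support=0.5))
    # e.g. {'battery'}, {'life'}, {'great'} and {'battery', 'life'} survive
    ```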

  13. A new Color Feature Extraction method Based on Dynamic Color Distribution Entropy of Neighbourhoods

    Directory of Open Access Journals (Sweden)

    Fatemeh Alamdar

    2011-09-01

    Full Text Available One of the important requirements in image retrieval, indexing, classification, clustering, etc. is extracting efficient features from images. The color feature is one of the most widely used visual features, and the color histogram is the most common way of representing it. One disadvantage of the color histogram is that it does not take the spatial distribution of color into consideration. In this paper, a dynamic color distribution entropy of neighborhoods method based on color distribution entropy is presented, which effectively describes the spatial information of colors. The image retrieval results, compared to improved color distribution entropy, show the acceptable efficiency of this approach.

  14. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach have provided an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach in order to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
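
    A minimal sparse autoencoder along these lines can be sketched in Keras; the dimensions and the L1 sparsity weight below are illustrative assumptions, not the TJ-II configuration.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    # Illustrative sizes; an L1 activity penalty on the hidden layer pushes
    # most activations toward zero, yielding a sparse learned representation.
    input_dim, code_dim = 1024, 64
    inputs = tf.keras.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu",
                        activity_regularizer=regularizers.l1(1e-5))(inputs)
    outputs = layers.Dense(input_dim, activation="sigmoid")(code)

    autoencoder = tf.keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(X, X, epochs=50)      # X: signals scaled to [0, 1]
    encoder = tf.keras.Model(inputs, code)  # extracts the learned features
    ```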

  15. Brugada ECG patterns in athletes.

    Science.gov (United States)

    Chung, Eugene H

    2015-01-01

    Brugada syndrome is responsible for up to 4% of all sudden cardiac deaths worldwide and up to 20% of sudden cardiac deaths in patients with structurally normal hearts. Heterogeneity of repolarization and depolarization, particularly over the right ventricle and the outflow tract, is responsible for the arrhythmogenic substrate. The coved Type 1 ECG pattern is considered diagnostic of the syndrome, but its prevalence is very low. Distinguishing between a saddleback Type 2 Brugada pattern and one of many "Brugada-like" patterns presents challenges, especially in athletes. A number of criteria have been proposed to assess Brugada ECG patterns. Proper precordial ECG lead placement is paramount. This paper reviews Brugada syndrome, Brugada ECG patterns, and recently proposed criteria. Recommendations for evaluating a Brugada ECG pattern are provided.

  16. Active Shape Model of Combining Pca and Ica: Application to Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    DENG Lin; RAO Ni-ni; WANG Gang

    2006-01-01

    Active Shape Model (ASM) is a powerful statistical tool for extracting the facial features of a face image under frontal view. It mainly relies on Principal Component Analysis (PCA) to statistically model the variability in the training set of example shapes. Independent Component Analysis (ICA) has been proven to be more efficient than PCA for extracting face features. In this paper, we combine PCA and ICA in a consecutive strategy to form a novel ASM. First, an initial model, which shows the global shape variability in the training set, is generated by the PCA-based ASM. Then, the final shape model, which contains more local characteristics, is established by the ICA-based ASM. Experimental results verify that the accuracy of facial feature extraction is statistically significantly improved by applying the ICA modes after the PCA modes.

  17. Focal-plane CMOS wavelet feature extraction for real-time pattern recognition

    Science.gov (United States)

    Olyaei, Ashkan; Genov, Roman

    2005-09-01

    Kernel-based pattern recognition paradigms such as support vector machines (SVM) require computationally intensive feature extraction methods for high-performance real-time object detection in video. The CMOS sensory parallel processor architecture presented here computes a delta-sigma (ΔΣ)-modulated Haar wavelet transform on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs distributed spatial focal-plane sampling and concurrent weighted average quantization. The architecture is benchmarked in SVM face detection on the MIT CBCL data set. At a 90% detection rate, first-level Haar wavelet feature extraction yields a 7.9% reduction in the number of false positives when compared to classification with no feature extraction. The architecture yields 1.4 GMACS of simulated computational throughput at SVGA imager resolution at 8-bit output depth.

  18. Feature extraction of induction motor stator fault based on particle swarm optimization and wavelet packet

    Institute of Scientific and Technical Information of China (English)

    WANG Pan-pan; SHI Li-ping; HU Yong-jun; MIAO Chang-xin

    2012-01-01

    To effectively extract the interturn short circuit fault features of an induction motor from the stator current signal, a novel feature extraction method based on the bare-bones particle swarm optimization (BBPSO) algorithm and wavelet packets was proposed. First, according to the maximum inner product between the current signal and the cosine basis functions, the method precisely estimates the waveform parameters of the fundamental component using the powerful global search capability of the BBPSO, so that the fundamental component can be eliminated without affecting the other harmonic components. Then, the harmonic components of the residual current signal are decomposed into a series of frequency bands by wavelet packets to extract the interturn short circuit fault features of the induction motor. Finally, the results of simulation and laboratory tests demonstrated the effectiveness of the proposed method.
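
    The wavelet packet stage can be illustrated with PyWavelets, assuming the fundamental has already been removed from the current signal (the BBPSO estimation step is omitted); per-band energies of the residual serve as fault features.

    ```python
    import numpy as np
    import pywt

    def wavelet_packet_band_energies(residual_current, wavelet="db4", level=3):
        """Decompose the residual current (fundamental already removed) into
        2**level frequency bands and use per-band energy as fault features."""
        wp = pywt.WaveletPacket(data=residual_current, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")   # bands in frequency order
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energies / energies.sum()            # normalized band energies
    ```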

  19. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Science.gov (United States)

    Xi, Wenfei; Shi, Zhengtao; Li, Dongsheng

    2017-07-01

    Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. The commonly used point feature extraction operators are the SIFT operator, Forstner operator, Harris operator and Moravec operator, etc. With its high spatial resolution, UAV imagery differs from traditional aerial imagery. Based on these characteristics of unmanned aerial vehicle (UAV) imagery, this paper uses the operators referred to above to extract feature points from building images, grassland images, shrubbery images, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed by considering their speed and accuracy. Finally, suggestions on how to adapt the different algorithms to diverse environments are proposed.
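
    A hedged sketch of such a comparison with OpenCV is shown below for the operators it provides (Harris, Shi-Tomasi and SIFT; Forstner and Moravec have no built-in OpenCV implementation); the image path is hypothetical.

    ```python
    import time
    import cv2
    import numpy as np

    img = cv2.imread("uav_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical tile

    # Harris response map
    t0 = time.perf_counter()
    harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
    t_harris = time.perf_counter() - t0

    # Shi-Tomasi ("good features to track")
    t0 = time.perf_counter()
    shi_tomasi = cv2.goodFeaturesToTrack(img, maxCorners=500,
                                         qualityLevel=0.01, minDistance=10)
    t_st = time.perf_counter() - t0

    # SIFT keypoints (built in from OpenCV 4.4 onward)
    t0 = time.perf_counter()
    sift_kp = cv2.SIFT_create().detect(img, None)
    t_sift = time.perf_counter() - t0

    print(f"Harris {t_harris:.3f}s, Shi-Tomasi {t_st:.3f}s, "
          f"SIFT {t_sift:.3f}s ({len(sift_kp)} keypoints)")
    ```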

  20. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    Science.gov (United States)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  1. Evaluation of various feature extraction methods for landmine detection using hidden Markov models

    Science.gov (United States)

    Hamdi, Anis; Frigui, Hichem

    2012-06-01

    Hidden Markov Models (HMM) have proved to be effective for detecting buried land mines using data collected by a moving-vehicle-mounted ground penetrating radar (GPR). The general framework for a HMM-based landmine detector consists of building a HMM model for mine signatures and a HMM model for clutter signatures. A test alarm is assigned a confidence proportional to the probability of that alarm being generated by the mine model and inversely proportional to its probability in the clutter model. The HMM models are built based on features extracted from GPR training signatures. These features are expected to capture the salient properties of the 3-dimensional alarms in a compact representation. The baseline HMM framework for landmine detection is based on gradient features. It models the time varying behavior of GPR signals, encoded using edge direction information, to compute the likelihood that a sequence of measurements is consistent with a buried landmine. In particular, the HMM mine model learns the hyperbolic shape associated with the signature of a buried mine by three states that correspond to the succession of an increasing edge, a flat edge, and a decreasing edge. Recently, for the same application, other features have been used with different classifiers. In particular, the Edge Histogram Descriptor (EHD) has been used within a K-nearest neighbor classifier. Another descriptor is based on Gabor features and has been used within a discrete HMM classifier. A third feature, closely related to the EHD, is the Bar histogram feature. This feature has been used within a Neural Networks classifier for handwritten word recognition. In this paper, we propose an evaluation of the HMM based landmine detection framework with several feature extraction techniques. We adapt and evaluate the EHD, Gabor, Bar, and baseline gradient feature extraction methods. We compare the performance of these features using a large and diverse GPR data collection.

  2. An Efficient Method for Extracting Features from Blurred Fingerprints Using Modified Gabor Filter

    Directory of Open Access Journals (Sweden)

    R.Vinothkanna

    2012-09-01

    Full Text Available Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes. The fingerprint is one of the most developed biometrics, with more history, research and design. Fingerprint recognition identifies people by using the impressions made by the minute ridge formations or patterns found on the fingertips. The extraction of features from blurred or unclear fingerprints is difficult, so instead of ridges we tried to extract valleys from the same images, because fingerprints consist of both ridges and valleys as features. We found good results for valley extraction with different filters, including the Gabor filter. In this paper we therefore modify the Gabor filter to reduce time consumption and to extract more valleys than the standard Gabor filter.
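
    The standard (unmodified) Gabor filtering that the paper starts from can be sketched with OpenCV as follows; the kernel parameters and file name are illustrative assumptions, and the paper's modification is not reproduced.

    ```python
    import cv2
    import numpy as np

    def gabor_filter_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
        """Bank of Gabor kernels at n_orient orientations; filtering a fingerprint
        with each kernel emphasizes ridge/valley structure along that direction."""
        return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
                for theta in np.arange(0, np.pi, np.pi / n_orient)]

    img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in gabor_filter_bank()]
    enhanced = np.max(responses, axis=0)  # strongest oriented response per pixel
    ```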

  3. 3D FEATURE POINT EXTRACTION FROM LIDAR DATA USING A NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Y. Feng

    2016-06-01

    Full Text Available Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a proper alternative to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z-axis component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.

  4. Automation of lidar-based hydrologic feature extraction workflows using GIS

    Science.gov (United States)

    Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.

    2016-10-01

    With the advent of LiDAR technology, higher resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is in resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows which can take a lot of time for researchers through manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction (specifically streams and drainages, irrigation networks, and inland wetlands) using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently being used as an aid for researchers in hydrologic feature extraction by simplifying the workflows, eliminating human errors when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 in streams, irrigation network and inland wetland extraction.

  5. Web News Extraction via Tag Path Feature Fusion Using DS Theory

    Institute of Scientific and Technical Information of China (English)

    Gong-Qing Wu; Lei Li; Li Li; Xindong Wu

    2016-01-01

    Contents, layout styles, and parse structures of web news pages differ greatly from one page to another. In addition, the layout style and the parse structure of a web news page may change from time to time. For these reasons, how to design features with excellent extraction performance for massive and heterogeneous web news pages is a challenging issue. Our extensive case studies indicate that there is potential relevancy between web content layouts and their tag paths. Inspired by this observation, we design a series of tag path extraction features to extract web news. Because each feature has its own strength, we fuse all of these features with DS (Dempster-Shafer) evidence theory, and then design a content extraction method, CEDS. Experimental results on both CleanEval datasets and web news pages selected randomly from well-known websites show that the F1-score with CEDS is 8.08% and 3.08% higher than those of the existing popular content extraction methods CETR and CEPR-TPR, respectively.
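
    Dempster's rule of combination, the fusion step named above, can be sketched directly; the mass values below are illustrative, not those learned from tag-path features.

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions over frozenset focal elements using
        Dempster's rule; conflicting mass (empty intersection) is renormalized."""
        combined, conflict = {}, 0.0
        for a, w1 in m1.items():
            for b, w2 in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
        k = 1.0 - conflict
        return {s: w / k for s, w in combined.items()}

    # Two tag-path features' beliefs that a node is news CONTENT vs NOISE
    CONTENT, NOISE = frozenset({"content"}), frozenset({"noise"})
    EITHER = CONTENT | NOISE
    m1 = {CONTENT: 0.6, NOISE: 0.1, EITHER: 0.3}
    m2 = {CONTENT: 0.5, NOISE: 0.2, EITHER: 0.3}
    print(dempster_combine(m1, m2))   # combined belief, conflict renormalized
    ```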

  6. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    Science.gov (United States)

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  7. Feature extraction of ship radiated-noise by 11/2-spectrum

    Institute of Scientific and Technical Information of China (English)

    FAN Yangyu; TAO Baoqi; XIONG Ke; SHANG Jiuhao; SUN Jincai; LI Yaan

    2002-01-01

    The properties of the 1 1/2-spectrum are proven and its performance is analyzed. By means of this spectrum, the basic frequency component of harmonic signals can be enhanced, Gaussian colored noise and symmetrically distributed noise can be canceled, and non-quadratic phase coupling harmonic components in a harmonic signal can be reduced. Ship radiated noise is analyzed and 7 of its features are extracted by the spectrum. By means of a B-P artificial neural network, three types of ships are classified according to the extracted features. The classification results for the three ship types A, B and C are 90%, 91.3% and 85.7%, respectively.

  8. [Studies on pharmacokinetics features of characteristic active ingredients of daidai flavone extract in different physiological status].

    Science.gov (United States)

    Zeng, Ling-Jun; Chen, Dan; Zheng, Li; Lian, Yun-Fang; Cai, Wei-Wei; Huang, Qun; Lin, Yi-Li

    2014-01-01

    In order to explore the clinical hypolipidemic features of Daidai flavone extract, the pharmacokinetic features of the characteristic active ingredients of Daidai flavone extract in normal and hyperlipemia rats were studied and compared. The study established a quantitative determination method for naringin and neohesperidin in plasma by UPLC-MS, and compared the pharmacokinetic differences of naringin and neohesperidin in normal and hyperlipemia rats on the basis of an established hyperlipemia model. The results indicated that the pharmacokinetic features of the characteristic active ingredients of Daidai flavone extract in normal and hyperlipemia rats showed significant differences. The C(max) of naringin and neohesperidin in hyperlipemia rat plasma after oral administration of Daidai flavone extract increased obviously, while t1/2, MRT and AUC0-24 h decreased, compared to normal rats; t(max), however, showed no difference from that of normal rats. The results further proved that Daidai flavone extract would have a better hypolipidemic effect in the hyperlipemia pathological status, and that the characteristic active ingredients naringin and neohesperidin are the material basis of the hypolipidemic effect of Daidai flavone extract.

  9. Impulse feature extraction method for machinery fault detection using fusion sparse coding and online dictionary learning

    Directory of Open Access Journals (Sweden)

    Deng Sen

    2015-04-01

    Full Text Available Impulse components in vibration signals are important fault features of complex machines. The sparse coding (SC) algorithm has been introduced as an impulse feature extraction method, but it cannot guarantee satisfactory performance in processing vibration signals with heavy background noise. In this paper, a method based on fusion sparse coding (FSC) and online dictionary learning is proposed to extract impulses efficiently. First, a fusion scheme of different sparse coding algorithms is presented to ensure higher reconstruction accuracy. Then, an improved online dictionary learning method using the FSC scheme is established to obtain a redundant dictionary, which can capture specific features of the training samples and reconstruct a sparse approximation of the vibration signals. Simulation shows that this method performs well in solving for sparse coefficients and training the redundant dictionary compared with other methods. Finally, the proposed method is applied to processing aircraft engine rotor vibration signals. Compared with other feature extraction approaches, our method can extract impulse features accurately and efficiently from heavily noisy vibration signals, which provides significant support for machinery fault detection and diagnosis.
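
    The online dictionary-learning stage can be approximated with scikit-learn's MiniBatchDictionaryLearning; the windowing and parameters below are illustrative assumptions, and the paper's FSC fusion scheme is not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    # Hypothetical vibration signal cut into overlapping windows; atoms are
    # learned online, in the spirit of the paper's dictionary-learning stage.
    rng = np.random.default_rng(0)
    signal = rng.normal(size=8192)     # placeholder for a vibration record
    win = 64
    windows = np.stack([signal[i:i + win]
                        for i in range(0, len(signal) - win, 16)])

    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, batch_size=64)
    codes = dico.fit_transform(windows)        # sparse codes per window
    reconstructed = codes @ dico.components_   # sparse approximation
    ```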

  10. An Adequate Approach to Image Retrieval Based on Local Level Feature Extraction

    Directory of Open Access Journals (Sweden)

    Sumaira Muhammad Hayat Khan

    2010-10-01

    Full Text Available Image retrieval based on text annotation has become obsolete and is no longer interesting for scientists because of its high time complexity and low precision of results. Alternatively, the increase in the amount of digital images has generated an excessive need for an accurate and efficient retrieval system. This paper proposes a content based image retrieval technique at a local level incorporating all the rudimentary features. The image first undergoes a segmentation process, and each segment is then directed to the feature extraction process. The proposed technique is based on the image's content, which primarily includes texture, shape and color. Besides these three basic features, FD (Fourier Descriptors) and edge histogram descriptors are also calculated to enhance the feature extraction process by capturing information at the boundary. The performance of the proposed method is found to be quite adequate when compared with the results from one of the best local level CBIR (Content Based Image Retrieval) techniques.

  11. A Novel Feature Extraction Scheme for Medical X-Ray Images

    OpenAIRE

    Prachi.G.Bhende; Dr.A.N.Cheeran

    2016-01-01

    X-ray images are gray scale images with almost the same textural characteristics. Conventional texture or color features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a novel combination of methods like GLCM, LBP and HOG for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database that can be used to perform reliable matching between different views of an object.

  12. Representation and Metrics Extraction from Feature Basis: An Object Oriented Approach

    Directory of Open Access Journals (Sweden)

    Fausto Neri da Silva Vanin

    2010-10-01

    Full Text Available This tutorial presents an object oriented approach to data reading and metrics extraction from feature bases. Structural issues about the bases are discussed first, then Object Oriented Programming (OOP) is applied to model the main elements in this context. The model implementation is then discussed using C++ as the programming language. To validate the proposed model, we apply it to some feature bases from the University of California, Irvine Machine Learning Database.

  13. Extracting invariable fault features of rotating machines with multi-ICA networks

    Institute of Scientific and Technical Information of China (English)

    焦卫东; 杨世锡; 吴昭同

    2003-01-01

    This paper proposes novel multi-layer neural networks based on Independent Component Analysis for feature extraction of fault modes. By the use of ICA, invariable features embedded in multi-channel vibration measurements under different operating conditions (rotating speed and/or load) can be captured together. Thus, stable MLP classifiers insensitive to the variation of operating conditions are constructed. The successful results achieved in selected experiments indicate the great potential of ICA in health condition monitoring of rotating machines.
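
    The ICA stage can be sketched with scikit-learn's FastICA on simulated multi-channel measurements; the sources and mixing below are synthetic stand-ins for real vibration data.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Hypothetical multi-channel vibration measurements: rows = samples,
    # columns = sensor channels mixing several latent source components.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2000)
    sources = np.c_[np.sin(2 * np.pi * 50 * t),           # e.g. shaft component
                    np.sign(np.sin(2 * np.pi * 13 * t))]  # e.g. impacting fault
    mixing = rng.normal(size=(2, 4))
    X = sources @ mixing + 0.05 * rng.normal(size=(2000, 4))

    ica = FastICA(n_components=2, random_state=0)
    independent_features = ica.fit_transform(X)  # condition-invariant components
    ```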

  14. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used for dimensionality reduction of the high-dimensional feature vector including the extracted texture features, since a high-dimensional feature vector can degrade classification performance; this paper thereby configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.

  15. [Management of pacemaker patients by bathtub ECG].

    Science.gov (United States)

    Mizukami, H; Togawa, T; Toyoshima, T; Ishijima, M

    1989-01-01

    We evaluated the efficacy of a new method for recording the electrocardiogram (ECG) of pacemaker patients in the bathtub (bathtub ECG). The ECG of pacemaker-implanted patients in a bathtub with tap water was recorded through three silver/silver chloride electrodes (4 x 4 cm) fitted on the inside wall of the bathtub. The electric signal was fed to an isolated amplifier and recorded on a strip chart recorder. In contrast to the conventional method for recording a standard ECG, the bathtub ECG does not require body surface electrodes and can be recorded at the patient's home. Although the amplitude of the bathtub ECG was reduced to approximately a quarter of that of the standard ECG, cardiac arrhythmia can be easily interpreted from the bathtub ECG. In patients with a pacemaker system, the amplitude of the pacing pulse recorded by bathtub ECG was much larger than that of the QRS complex recorded by standard ECG. Therefore, we conclude that the bathtub ECG would be a suitable method for following up patients with pacemaker systems.

  16. An Efficient Feature Extraction Method Based on Entropy for Power Quality Disturbance

    Directory of Open Access Journals (Sweden)

    P. Kailasapathi

    2014-09-01

    Full Text Available This study explores the applicability of entropy, the thermodynamic state variable introduced by the German physicist Rudolf Clausius, and presents the concepts and application of this state variable as a measure of system disorganization. An entropy-based feature analysis method for power quality disturbance analysis is then proposed. Feature extraction of a disturbed power signal provides information that helps to detect the fault responsible for a power quality disturbance. A precise and fast feature extraction tool helps power engineers to monitor and manage power disturbances more efficiently. First, the decomposition coefficients are obtained by applying a 10-level wavelet multiresolution analysis to the signals (normal, sag, swell, outage, harmonic, sag with harmonic and swell with harmonic) generated by using parametric equations. Second, a combined feature vector is obtained from the standard deviations of distinctive features extracted for each signal by applying the energy, the Shannon entropy and the log-energy entropy methods to the decomposition coefficients. Finally, the entropy methods detect the different types of power quality disturbance.
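
    The three entropy measures named above can be computed from wavelet decomposition coefficients as in the following sketch (PyWavelets; the wavelet choice is an assumption, and the decomposition level must suit the signal length).

    ```python
    import numpy as np
    import pywt

    def entropy_features(signal, wavelet="db4", level=10):
        """Energy, Shannon entropy and log-energy entropy of the coefficients
        from a multi-level wavelet decomposition of a power-quality signal."""
        feats = []
        for c in pywt.wavedec(signal, wavelet, level=level):
            e = np.square(c)
            p = e / e.sum()                          # normalized energy distribution
            shannon = -np.sum(p * np.log2(p + 1e-12))
            log_energy = np.sum(np.log(e + 1e-12))
            feats.append((e.sum(), shannon, log_energy))
        return np.array(feats)   # one (energy, Shannon, log-energy) row per band
    ```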

  17. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  18. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
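
    A hedged sketch of the overall pipeline is shown below with ordinary PCA standing in for PPCA (scikit-learn's PCA does not handle missing values the way PPCA's EM formulation does) followed by an RBF SVM; X and y denote the medical-test matrix and HP/NS labels.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Ordinary PCA stands in for PPCA here; X = medical-test results,
    # y = Heart Patient / Normal Subject labels (both assumed to exist).
    pipeline = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),       # keep components covering 95% variance
        SVC(kernel="rbf", C=1.0, gamma="scale"),
    )
    # scores = cross_val_score(pipeline, X, y, cv=10, scoring="accuracy")
    ```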

  19. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily the lack of awareness and proper health care. As they say, prevention is better than cure, so a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help in saving their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set technique, to see which better suits the automated system in terms of higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images, and the extracted data is subjected to data mining techniques to determine which feature is best for which stage of cancer. The results thus obtained clearly show a higher percentage of success for the combination feature set, with a 100% detection rate.

  1. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in images. The traditional Text Based Image Retrieval system is no longer adequate, since it is time consuming and requires manual image annotation; moreover, the image annotation differs from person to person. An alternative to this is the Content Based Image Retrieval (CBIR) system, which retrieves/searches for images using their content rather than text, keywords, etc. A lot of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to use for defining an object in an image as compared to other features such as color, texture, etc. Over and above, no descriptor applied alone will give fruitful results; by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. So, an attempt will be made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state of the art techniques.
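
    One common choice of accurate, transformation-invariant shape features, not necessarily the descriptor the paper settles on, is the set of seven Hu moment invariants, sketched here with OpenCV.

    ```python
    import cv2
    import numpy as np

    def shape_features(binary_mask):
        """Seven Hu moment invariants of a segmented object: rotation-, scale-
        and translation-invariant shape descriptors usable as a CBIR feature."""
        moments = cv2.moments(binary_mask.astype(np.uint8))
        hu = cv2.HuMoments(moments).flatten()
        # log-scale for numerical comparability across shapes
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    ```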

  2. The clinical ECG features and pathologic characteristics of coronary heart disease: an analysis of 150 patients

    Institute of Scientific and Technical Information of China (English)

    蔡红梅

    2013-01-01

    Objective: To investigate the clinical ECG features and pathological characteristics of coronary heart disease. Methods: 150 coronary heart disease (CHD) patients (group A) underwent 12-lead ECG examination and were compared with 55 healthy adults (group B). Results: The ECG changes of the 150 CHD patients (group A) and 110 healthy adults (group B) differed significantly (P < 0.05). Conclusion: The ECG is a fast and effective method for detecting coronary heart disease (CHD).

  3. Image Analysis of Soil Micromorphology: Feature Extraction, Segmentation, and Quality Inference

    Directory of Open Access Journals (Sweden)

    Petros Maragos

    2004-06-01

    Full Text Available We present an automated system that we have developed for estimation of the bioecological quality of soils using various image analysis methodologies. Its goal is to analyze soil-section images, extract features related to their micromorphology, and relate the visual features to various degrees of soil fertility inferred from biochemical characteristics of the soil. The image methodologies used range from low-level image processing tasks, such as nonlinear enhancement, multiscale analysis, geometric feature detection, and size distributions, to object-oriented analysis, such as segmentation, region texture, and shape analysis.

  4. A New Feature Extraction Algorithm Based on Entropy Cloud Characteristics of Communication Signals

    Directory of Open Access Journals (Sweden)

    Jingchao Li

    2015-01-01

    Full Text Available Identifying communication signals in low-SNR environments has become more difficult due to the increasingly complex communication environment. Most of the relevant literature revolves around signal recognition under stable SNR and is not applicable in time-varying SNR environments. To solve this problem, we propose a new feature extraction method based on the entropy cloud characteristics of communication modulation signals. The proposed algorithm first extracts the Shannon entropy and index entropy characteristics of the signals and then effectively combines entropy theory and cloud model theory. Compared with traditional feature extraction methods, the instability distribution characteristics of the signals' entropy can be further extracted from the cloud model's digital characteristics in low-SNR environments by the proposed algorithm, which improves signal recognition significantly. The results of the numerical simulations show that the entropy cloud feature extraction algorithm achieves better signal recognition, and even when the SNR is -11 dB, the signal recognition rate can still reach 100%.

  5. Attributed Relational Graph Based Feature Extraction of Body Poses In Indian Classical Dance Bharathanatyam

    Directory of Open Access Journals (Sweden)

    Athira. Sugathan

    2014-05-01

    Full Text Available Articulated body pose estimation in computer vision is an important problem because of the complexity of the models. It is useful in real-time applications such as surveillance cameras, computer games, human-computer interaction, etc. Feature extraction is the main part of pose estimation and is key to successful classification. In this paper, we propose a system for extracting features from the relational graphs of articulated upper-body poses of basic Bharatanatyam steps, each performed by different persons of different experience and size. Our method has the ability to extract features from an attributed relational graph derived from challenging images with background clutter, clothing diversity, illumination variation, etc. The system starts with a skeletonization process which determines the human pose and increases smoothness using a B-spline approach. An attributed relational graph is generated, and the geometrical features are extracted for the correct discrimination between shapes, which can be useful for classification and annotation of dance poses. We evaluate our approach experimentally on 2D images of basic Bharatanatyam poses.

  6. Aircraft micro-doppler feature extraction from high range resolution profiles

    CSIR Research Space (South Africa)

    Berndt, RJ

    2015-10-01

    Full Text Available and aircraft propellers from high range resolution profiles. The two features extracted are rotation rate harmonic (related to the rotation rate and number of blades of the scattering propeller/rotor) and the relative down range location of modulating propeller...

  7. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cues of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency ...

  8. Spectral and bispectral feature-extraction neural networks for texture classification

    Science.gov (United States)

    Kameyama, Keisuke; Kosugi, Yukio

    1997-10-01

    A neural network model (Kernel Modifying Neural Network: KM Net) specialized for image texture classification, which unifies the filtering kernels for feature extraction and the layered network classifier, is introduced. The KM Net consists of a layer of convolution kernels that are constrained to be 2D Gabor filters to guarantee efficient spectral feature localization. The KM Net enables automated feature extraction in multi-channel texture classification through simultaneous modification of the Gabor kernel parameters (central frequency and bandwidth) and the connection weights of the subsequent classifier layers by a backpropagation-based training rule. The capability of the model and its training rule was verified via segmentation of common texture mosaic images. In comparison with the conventional multi-channel filtering method, which uses numerous filters to cover the spatial frequency domain, the proposed strategy can greatly reduce the computational cost of both feature extraction and classification. Since the adaptive Gabor filtering scheme is also applicable to band selection in moment spectra of higher orders, the network model was extended to adaptive bispectral filtering for extraction of the phase relations among frequency components. The ability of this Bispectral KM Net was demonstrated in the discrimination of visually discriminable synthetic textures with identical local power spectral distributions.

  9. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as open source.

  10. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with running on-line data processing ...

  11. A New Skeleton Feature Extraction Method for Terrain Model Using Profile Recognition and Morphological Simplification

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    Full Text Available It is always difficult to preserve rings and main trunk lines in real engineering feature extraction for terrain models. In this paper, a new skeleton feature extraction method is proposed to solve these problems, which puts forward a simplification algorithm based on morphological theory to eliminate the noise points among the target points produced by classical profile recognition. As is well known, noise points are the key factor influencing the accuracy and efficiency of feature extraction. Our method connects the optimized feature point subset after morphological simplification; therefore, the efficiency of ring processing and pruning is improved markedly, and the accuracy is enhanced without the negative effect of noise points. An outbranching concept is defined, and related algorithms are proposed to extract sufficiently long trunks, which are consistent with the real terrain skeleton. All of the algorithms are evaluated on a large amount of real experimental data, including GTOPO30 and benchmark data provided by PPA, to verify the performance and accuracy of our method. The results show that our method outperforms PPA as a whole.

  12. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploited the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallel and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
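
    For readers unfamiliar with the transform itself, here is a minimal CPU-side sketch in Python of one maximum noise fraction step, assuming a (rows, cols, bands) cube, horizontal neighbor differencing for noise estimation, and an invertible noise covariance; the function name, the differencing scheme, and the dense eigensolver are illustrative stand-ins for the paper's GPU pipeline.

        import numpy as np

        def mnf_transform(cube, n_components=10):
            """Sketch of a maximum noise fraction transform for a (rows, cols, bands) cube."""
            bands = cube.shape[-1]
            X = cube.reshape(-1, bands).astype(np.float64)
            # Noise estimated from horizontal neighbor differences; as in the paper's
            # optimization, the noise covariance is formed before the image covariance.
            noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
            Cn = np.cov(noise, rowvar=False)
            Cx = np.cov(X, rowvar=False)
            # Generalized eigenproblem Cx v = lambda Cn v, largest ratios first
            evals, evecs = np.linalg.eig(np.linalg.solve(Cn, Cx))
            order = np.argsort(evals.real)[::-1]
            V = evecs[:, order[:n_components]].real
            return (X - X.mean(axis=0)) @ V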

  13. A Novel Approach to Extracting Casing Status Features Using Data Mining

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2013-12-01

    Full Text Available Casing coupling location signals provided by the magnetic localizer in retractors are typically used to ascertain the position of casing couplings in horizontal wells. However, the casing coupling location signal is usually submerged in noise, which will result in the failure of casing coupling detection under harsh logging environment conditions. The limitation of Shannon wavelet time entropy in casing status feature extraction is presented by analyzing its application mechanism, and a corresponding improved algorithm is subsequently proposed. On the basis of the wavelet transform, two derivative algorithms, singular value decomposition and Tsallis entropy theory, are introduced and their physical meanings are investigated. Meanwhile, a novel data mining approach to extracting casing status features with Tsallis wavelet singularity entropy is put forward in this paper. The theoretical analysis and experimental results indicate that the proposed approach can not only extract the casing coupling features accurately, but also identify the characteristics of perforation and local corrosion in casings. The innovation of the paper lies in the use of simple wavelet entropy algorithms to extract the complex nonlinear logging signal features of a horizontal well tractor.
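
    A minimal Python sketch of the kind of wavelet singularity entropy described here: wavelet-decompose the signal, take the singular values of the coefficient matrix, and evaluate the Tsallis entropy of the normalized singular spectrum. Truncating the coefficient vectors to a common length and the parameter defaults are simplifying assumptions, not the paper's exact construction.

        import numpy as np
        import pywt

        def tsallis_wavelet_singularity_entropy(signal, wavelet="db4", level=4, q=2.0):
            """Tsallis entropy of the singular-value spectrum of wavelet coefficients."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            n = min(len(c) for c in coeffs)
            M = np.array([c[:n] for c in coeffs])    # scales x time coefficient matrix
            s = np.linalg.svd(M, compute_uv=False)
            p = s / s.sum()                          # normalized singular values
            return (1.0 - np.sum(p**q)) / (q - 1.0)  # Tsallis entropy, q != 1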

  14. Extraction of Building Features from Stand-Off Measured Through-Wall Radar Data

    NARCIS (Netherlands)

    Wit, J.J.M. de; Rossum, W.L. van

    2016-01-01

    Automated extraction of building features is a great aid in synthesizing building maps from radar data. In this paper, a model-based method is described to detect and classify canonical scatterers, such as corners and planar walls, inside a building. Once corners and walls have been located, a buildin

  15. Improving local PCA in pseudo phase space for fetal heart rate estimation from single lead abdominal ECG.

    Science.gov (United States)

    Wei, Zheng; Hongxing, Liu; Jianchun, Cheng

    2011-12-01

    This paper proposes an improved local principal component analysis (LPCA) in pseudo phase space for fetal heart rate estimation from a single lead abdominal ECG signal. The improved LPCA process can extract both the maternal ECG component and the fetal ECG component from an abdominal signal. The instantaneous fetal heart rate can then be estimated from the extracted fetal ECG waveform. Compared with the classical LPCA procedure and another single-lead-based fetal heart rate estimation method, our improved LPCA method has shown better robustness and efficiency in fetal heart rate estimation, tested on synthetic ECG signals and a real fetal ECG database from PhysioBank. For the real fetal ECG validation dataset of six long-duration recordings (obtained between the 22nd and 40th week of gestation), the average accuracy of the improved LPCA method is 84.1%.
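
    The two building blocks the method rests on can be sketched in Python under strong simplifying assumptions: a delay-coordinate embedding of the single-lead signal into pseudo phase space, then a PCA projection onto the dominant subspace with reconstruction by anti-diagonal averaging. True LPCA projects within local neighborhoods of the phase space; for brevity this sketch uses one global projection, and all names are illustrative.

        import numpy as np

        def delay_embed(x, dim=20):
            """Pseudo phase space: delay-coordinate embedding of a 1-D signal."""
            n = len(x) - dim + 1
            return np.stack([x[i:i + n] for i in range(dim)], axis=1)

        def pca_phase_space_filter(x, dim=20, n_keep=2):
            """Keep dominant principal components, reconstruct by anti-diagonal averaging."""
            Y = delay_embed(x, dim)
            mean = Y.mean(axis=0)
            _, _, Vt = np.linalg.svd(Y - mean, full_matrices=False)
            Yr = (Y - mean) @ Vt[:n_keep].T @ Vt[:n_keep] + mean
            out = np.zeros(len(x))
            cnt = np.zeros(len(x))
            for j in range(dim):                     # undo the embedding
                out[j:j + len(Yr)] += Yr[:, j]
                cnt[j:j + len(Yr)] += 1
            return out / cnt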

  16. False ventricular tachycardia alarm suppression in the ICU based on the discrete wavelet transform in the ECG signal.

    Science.gov (United States)

    Salas-Boni, Rebeca; Bai, Yong; Harris, Patricia Rae Eileen; Drew, Barbara J; Hu, Xiao

    2014-01-01

    Over the past few years, reducing the number of false positive cardiac monitor alarms (FA) in the intensive care unit (ICU) has become an issue of the utmost importance. In our work, we developed a robust methodology that, without the need for additional non-ECG waveforms, suppresses false positive ventricular tachycardia (VT) alarms without producing false negative alarms. Our approach is based on features extracted from the ECG signal 20 seconds prior to a triggered alarm. We applied a multiresolution wavelet transform to the ECG data 20 seconds prior to the alarm trigger, extracted features from appropriately chosen scales, and combined them across all available leads. These representations are presented to an L1-regularized logistic regression classifier. Results are shown on two datasets of physiological waveforms with manually assessed cardiac monitor alarms: the MIMIC II dataset, where we achieved a false alarm (FA) suppression of 21% with zero true alarm (TA) suppression; and a dataset compiled by UCSF and General Electric, where a 36% FA suppression was achieved with zero TA suppression. The methodology described in this work could be implemented to reduce the number of false monitor alarms for other arrhythmias.
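
    A minimal Python sketch of this pipeline, assuming one 1-D array per lead for the 20 s pre-alarm window; the db4 wavelet, log-energy per detail scale, and the regularization strength are illustrative choices, not the paper's exact feature set.

        import numpy as np
        import pywt
        from sklearn.linear_model import LogisticRegression

        def vt_alarm_features(ecg_leads, wavelet="db4", level=5):
            """Log-energies of detail scales, concatenated across all leads."""
            feats = []
            for lead in ecg_leads:
                coeffs = pywt.wavedec(lead, wavelet, level=level)
                feats.extend(np.log(np.sum(c**2) + 1e-12) for c in coeffs[1:])
            return np.array(feats)

        # L1-regularized logistic regression as the alarm classifier
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        # clf.fit(np.array([vt_alarm_features(seg) for seg in segments]), labels)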

  17. The Rolling Bearing Fault Feature Extraction Based on the LMD and Envelope Demodulation

    Directory of Open Access Journals (Sweden)

    Jun Ma

    2015-01-01

    Full Text Available Since the operation of rolling bearings is a complex and nonstationary dynamic process, the common time- and frequency-domain characteristics of vibration signals are submerged in noise; extracting the fault feature from the vibration signal is therefore the key to fault diagnosis. A fault feature extraction method for rolling bearings based on local mean decomposition (LMD) and envelope demodulation is proposed. First, the original vibration signal is decomposed by LMD to obtain a series of product functions (PFs). Then, envelope demodulation analysis is applied to the selected PF component. Finally, a Fourier transform is performed on the demodulated signal, and the failure condition is judged according to the dominant frequency of the spectrum. The results show that the proposed method can correctly extract the fault characteristics to diagnose faults.
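
    The envelope demodulation step reduces to a few lines of Python, assuming a sampling rate fs and one PF component as a 1-D array; the dominant peak of the returned spectrum would then be compared against the bearing's theoretical fault frequencies. Names are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        def envelope_spectrum(pf, fs):
            """Hilbert envelope of a product function, then its Fourier spectrum."""
            env = np.abs(hilbert(pf))            # instantaneous amplitude
            env = env - env.mean()               # drop the DC term before the FFT
            spec = np.abs(np.fft.rfft(env)) / len(env)
            freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
            return freqs, spec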

  18. Extraction of Spatial-Temporal Features for Vision-Based Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Yu; XU Guangyou; ZHU Yuanxin

    2000-01-01

    One of the key problems in a vision-based gesture recognition system is the extraction of spatial-temporal features of gesturing. In this paper, an approach based on motion segmentation is proposed to realize this task. The direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion, and based on the dominant motion model the gesturing region, i.e., the dominant object, is extracted. In this way the spatial-temporal features of gestures can be obtained. Finally, the dynamic time warping (DTW) method is used directly to match 12 control gestures (6 for "translation" orders, 6 for "rotation" orders). A small demonstration system has been set up to verify the method, in which a panorama image viewer (built by mosaicing a sequence of standard "Garden" images) can be controlled with recognized gestures instead of a 3-D mouse tool.
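
    DTW matching of the kind used here is compact enough to sketch in Python; this is the textbook dynamic program over two feature sequences (frames x dimensions), not the authors' code, and the Euclidean frame cost is an assumption.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two feature sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # recognized = min(templates, key=lambda t: dtw_distance(observed, t))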

  19. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    Science.gov (United States)

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB), and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves were given for all methods, and 100% detection sensitivity was reached except for naive Bayes.

  20. [Research on non-rigid medical image registration algorithm based on SIFT feature extraction].

    Science.gov (United States)

    Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen

    2010-08-01

    For non-rigid registration of medical images, this paper gives a practical feature point matching algorithm: an image registration algorithm based on the Scale-Invariant Feature Transform (SIFT). The algorithm makes use of image features that are invariant to translation, rotation, and affine transformation in scale space to extract the image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between the images, so the accuracy of image registration is improved. On this basis, an affine transform is chosen to complete the non-rigid registration, and a normalized mutual information measure and a PSO (particle swarm optimization) algorithm are chosen to optimize the registration process. The experimental results show that the method can achieve better registration results than the method based on mutual information.
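
    With OpenCV in Python, the SIFT extraction and bidirectional (cross-checked) matching stage could be sketched as follows; this assumes opencv-python 4.4 or later, where SIFT ships in the main package, and covers only the matching step, not the affine/PSO refinement.

        import cv2

        def bidirectional_sift_matches(img_a, img_b):
            """Cross-checked SIFT matches: only pairs that agree in both directions."""
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(img_a, None)
            kp_b, des_b = sift.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
            pts_a = [kp_a[m.queryIdx].pt for m in matches]
            pts_b = [kp_b[m.trainIdx].pt for m in matches]
            return pts_a, pts_b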

  1. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicing is a crucial step in the three-dimensional reconstruction of composite materials to align the serial images. A novel method is adopted to mosaic two SiC/Al microscopic images with an amplification coefficient of 1000. The two images are denoised with a Gaussian model, and feature points are then extracted using the Harris corner detector. The feature points are filtered through a Canny edge detector. A 40x40 feature template is chosen by sowing a seed in an overlapped area of the reference image, and the homologous region in the floating image is acquired automatically by means of correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into universal coordinates and merged into the final mosaic image.

  2. Resemblance Coefficient Based Intrapulse Feature Extraction Approach for Radar Emitter Signals

    Institute of Scientific and Technical Information of China (English)

    ZHANG Gexiang; JIN Weidong; HU Laizhao

    2005-01-01

    Radar emitter signal recognition plays an important role in electronic intelligence and electronic support measure systems. To raise the recognition accuracy for advanced radar emitter signals to meet the requirements of modern electronic warfare, a Resemblance coefficient (RC) approach is proposed to extract features from radar emitter signals with different intrapulse modulation laws. The definition of RC is given, and its properties and advantages are analyzed. The feature extraction algorithm using RC is described in detail, and the noise-suppression performance of RC features is also analyzed. Subsequently, neural networks are used to design classifiers. Because RC captures the change and distribution information of the amplitude, phase, and frequency of radar emitter signals, it can reflect the intrapulse modulation laws effectively. The results of theoretical analysis and simulation experiments show that RC features are not sensitive to noise. Nine radar emitter signals were chosen for the experiments on RC feature extraction and automatic recognition. A large number of experimental results show that a good recognition rate can be achieved using the proposed approach, which proves it to be a valid and practical approach.

  3. Feature extraction and analysis of online reviews for the recommendation of books using opinion mining technique

    Directory of Open Access Journals (Sweden)

    Shahab Saquib Sohail

    2016-09-01

    Full Text Available The customer's review plays an important role in deciding purchasing behaviour for online shopping, as a customer prefers to get the opinions of other customers by observing their views through online product reviews, blogs, social networking sites, etc. Customer reviews reflect customer sentiment and have substantial significance for products sold online, including electronic gadgets, movies, household appliances, and books. Hence, extracting the exact features of a product by analyzing the text of reviews requires a lot of effort and human intelligence. In this paper we analyze the online reviews available for books and extract book features from the reviews using human intelligence. We have proposed a technique to categorize the features of books from the reviews of the customers. The extracted features may help in deciding which books to recommend to readers. The ultimate goal of the work is to fulfil the requirements of users and provide them with their desired books. Thus, we have evaluated our categorization method through the users themselves, surveying qualified persons about the books concerned. The survey results show high precision of the categorized features, which clearly indicates that the proposed method is useful and appealing. The proposed technique may help in recommending the best books to interested readers and may also be generalized to recommend any product to users.

  4. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filter characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. The research merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction: the values of these four indexes are combined into a feature vector. Then, the intrinsic characteristic components in the vibration signal are accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference problem of invalid signal components such as trend items and noise, which plague traditional methods, is effectively solved. The large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method is outstanding at removing noise while retaining useful feature information, and it shows higher accuracy and superiority.

  5. Capacitive ECG system with direct access to standard leads and body surface potential mapping.

    Science.gov (United States)

    Oehler, Martin; Schilling, Meinhard; Esperer, Hans Dieter

    2009-12-01

    Capacitive electrodes provide the same access to the human electrocardiogram (ECG) as galvanic electrodes, but without the need for direct electrical skin contact and even through layers of clothing. Thus, potential artifacts resulting from poor electrode contact with the skin are avoided, and preparation time is significantly reduced. Our system integrates such capacitive electrodes in a 15-sensor array combined with a Tablet PC. This integrated lightweight ECG system (cECG) is easy to place on the chest wall and allows simultaneous recording of 14 ECG channels, even if the patient is lightly dressed, e.g., in a t-shirt. In this paper, we present preliminary results on the performance of the cECG regarding its capability of recording body surface potential maps (BSPMs) and obtaining reconstructed standard ECG leads, including Einthoven, Goldberger and, with some limitations, Wilson leads. All signals were measured with the subject lying in a supine position and wearing a cotton shirt. Signal quality and diagnostic ECG information of the extracted leads are compared with standard ECG measurements. The results show a very close correlation between both types of ECG measurements. It is concluded that the cECG lends itself to rapid screening in clinically unstable patients.

  6. Vibration Feature Extraction and Analysis for Fault Diagnosis of Rotating Machinery-A Literature Survey

    Directory of Open Access Journals (Sweden)

    Saleem Riaz

    2017-02-01

    Full Text Available Safety, reliability, efficiency, and performance are the main concerns for rotating machinery, which is widely used across industrial applications. Condition monitoring and fault diagnosis of rotating machinery are very important yet often complex and labor-intensive tasks, and feature extraction techniques play a vital role in reliable, effective, and efficient diagnosis. Developing effective fault diagnostic methods that use different fault features at different steps has therefore become attractive, particularly for bearings, which are widely used in medical applications, food processing, semiconductor and paper making industries, and aircraft components. This paper reviews the variety of vibration feature extraction techniques applied to rotating machinery. The literature is generally classified into two main groups: frequency-domain analysis and time-frequency analysis. However, the signal processing methods used for fault detection and diagnosis of rotating machines have their own limitations; in practice, most fault-related components of the vibration signal are buried in background noise and other mechanical vibrations. This paper also reviews how advanced signal processing methods, such as empirical mode decomposition and interference cancellation algorithms, have been investigated and developed. Condition-based maintenance of rotating machines, which prevents failures, increases availability, and reduces maintenance cost, is also becoming necessary. A key problem in developing signal-processing-based fault detection and diagnosis algorithms is fault feature extraction or quantification, and vibration-signal-based techniques are currently the most widely used. Furthermore, researchers are widely interested in making fault diagnosis automatic.

  7. Image quality assessment method based on nonlinear feature extraction in kernel space

    Institute of Scientific and Technical Information of China (English)

    Yong DING; Nan LI; Yang ZHAO; Kai HUANG

    2016-01-01

    To match human perception, extracting perceptual features effectively plays an important role in image quality assessment. In contrast to most existing methods that use linear transformations or models to represent images, we employ a complex mathematical expression of high dimensionality to reveal the statistical characteristics of the images. Furthermore, by introducing kernel methods to transform the linear problem into a nonlinear one, a full-reference image quality assessment method is proposed based on high-dimensional nonlinear feature extraction. Experiments on the LIVE, TID2008, and CSIQ databases demonstrate that nonlinear features offer competitive performance for image inherent quality representation and the proposed method achieves a promising performance that is consistent with human subjective evaluation.

  8. Micro-Doppler Feature Extraction and Recognition Based on Netted Radar for Ballistic Targets

    Directory of Open Access Journals (Sweden)

    Feng Cun-qian

    2015-12-01

    Full Text Available This study examines the complexities of using netted radar to recognize and resolve ballistic midcourse targets. The application of micro-motion feature extraction to ballistic midcourse targets is analyzed, and the current status of application and research on micro-motion feature recognition is summarized for single-function radar networks such as low- and high-resolution imaging radar networks. Advantages and disadvantages of these networks are discussed with respect to target recognition. Hybrid-mode radar networks combine low- and high-resolution imaging radar and provide a specific reference frequency that is the basis for ballistic target recognition. Main research trends are discussed for hybrid-mode networks that apply micro-motion feature extraction to ballistic midcourse targets.

  9. Extraction of ABCD rule features from skin lesions images with smartphone.

    Science.gov (United States)

    Rosado, Luís; Castro, Rui; Ferreira, Liliana; Ferreira, Márcia

    2012-01-01

    One of the greatest challenges in dermatology today is the early detection of melanoma, since the success rate of curing this type of cancer is very high if it is detected during the early stages of its development. The main objective of the work presented in this paper is to create a prototype of a patient-oriented system for skin lesion analysis using a smartphone. This work aims at implementing a self-monitoring system that collects, processes, and stores information on skin lesions through the automatic extraction of specific visual features. The selection of the features was based on the ABCD rule, which considers four visual criteria regarded as highly relevant for the detection of malignant melanoma. The algorithms used to extract these features are briefly described, and the results achieved using images taken with the smartphone camera are discussed.

  10. Comparative Analysis of Feature Extraction Methods for the Classification of Prostate Cancer from TRUS Medical Images

    Directory of Open Access Journals (Sweden)

    Manavalan Radhakrishnan

    2012-01-01

    Full Text Available Diagnosing prostate cancer is a challenging task for urologists, radiologists, and oncologists. Ultrasound imaging is one of the promising techniques for early detection of prostate cancer. The region of interest (ROI) is identified by different methods after preprocessing. In this paper, DBSCAN clustering with morphological operators is used to extract the prostate region. The evaluation of texture features is important for several image processing applications, so the performance of features extracted by various texture methods, such as the histogram, the Gray-Level Co-occurrence Matrix (GLCM), and the Gray-Level Run-Length Matrix (GLRLM), is analyzed separately. In this paper, it is proposed to combine the histogram, GLRLM, and GLCM in order to study the performance. A Support Vector Machine (SVM) is adopted to classify the extracted features as benign or malignant. The performance of the texture methods is evaluated using statistical parameters such as sensitivity, specificity, and accuracy. The comparative analysis was performed over 5500 digitized TRUS images of the prostate.
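
    For the GLCM part of such a feature set, a minimal Python sketch with scikit-image (0.19+, where the functions are spelled graycomatrix/graycoprops) might look as follows; the distances, angles, and the four properties are illustrative defaults, not the paper's exact configuration.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(roi, distances=(1,), angles=(0.0, np.pi / 2)):
            """Common GLCM descriptors for a uint8 grayscale ROI."""
            glcm = graycomatrix(roi, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])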

  11. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    Science.gov (United States)

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied this method to develop algorithms identifying patients with rheumatoid arthritis (RA), and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure thus achieved comparable or slightly higher accuracy than those trained with expert-curated features, and the majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping.

  12. Feature Extraction for the Analysis of Multi-Channel EEG Signals Using Hilbert- Huang Technique

    Directory of Open Access Journals (Sweden)

    Mahipal Singh

    2016-02-01

    Full Text Available This research article proposes a Hilbert-Huang transform (HHT) based feature extraction approach for the analysis of multi-channel EEG signals using their local time-scale features. The applicability of these recently developed HHT-based features is investigated for classifying a small set of non-motor cognitive tasks. HHT is a combination of multivariate empirical mode decomposition (MEMD) and the Hilbert transform (HT). In the first stage, multi-channel EEG signals (6 channels per trial per task per subject) corresponding to a small set of non-motor mental tasks were decomposed using the MEMD algorithm. This yields an adaptive, i.e., data-driven, decomposition of the data into twelve monocomponent oscillatory modes known as intrinsic mode functions (IMFs) and one residue function. The generated IMFs are multivariate, i.e., mode-aligned, and narrowband. From the generated IMFs, the most sensitive IMF was chosen by analyzing the power spectra. Since IMFs are amplitude- and frequency-modulated, the chosen IMF was analyzed through its instantaneous amplitude (IA) and instantaneous frequency (IF), i.e., local features extracted by applying the Hilbert transform. Finally, the discriminatory power of these local features was investigated through a statistical significance test using the paired t-test. The analysis results clearly support the potential of these local features for classifying different cognitive tasks in an EEG-based Brain-Computer Interface (BCI) system.
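
    The HT stage reduces to a few lines of Python: the sketch below extracts IA and IF from one IMF, assuming a sampling rate fs (the MEMD stage needs a dedicated implementation and is not shown). Names are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_features(imf, fs):
            """Instantaneous amplitude and frequency of one IMF via the Hilbert transform."""
            analytic = hilbert(imf)
            ia = np.abs(analytic)                            # instantaneous amplitude
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency
            return ia, inst_freq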

  13. Hardwood species classification with DWT based hybrid texture feature extraction techniques

    Indian Academy of Sciences (India)

    Arvind R Yadav; R S Anand; M L Dewal; Sangeeta Gupta

    2015-12-01

    In this work, discrete wavelet transform (DWT) based hybrid texture feature extraction techniques have been used to categorize microscopic images of hardwood species into 75 different classes. Initially, the DWT has been employed to decompose the image up to 7 levels using the Daubechies (db3) wavelet as the decomposition filter. Further, first-order statistics (FOS) and four variants of local binary pattern (LBP) descriptors are used to acquire distinct features of these images at the various levels. Linear support vector machine (SVM), radial basis function (RBF) kernel SVM, and random forest classifiers have been employed for classification. The classification accuracies obtained with state-of-the-art and DWT based hybrid texture features using the various classifiers are compared. The DWT based FOS-uniform local binary pattern (DWTFOSLBPu2) texture features at the 4th level of image decomposition have produced the best classification accuracy of 97.67 ± 0.79% and 98.40 ± 0.64% for grayscale and RGB images, respectively, using the linear SVM classifier. Reduction of the feature dataset by the minimal redundancy maximal relevance (mRMR) feature selection method is achieved, and the best classification accuracies of 99.00 ± 0.79% and 99.20 ± 0.42% have been obtained for the DWT based FOS-LBP histogram Fourier features (DWTFOSLBP-HF) technique at the 5th and 6th levels of image decomposition for grayscale and RGB images, respectively, using the linear SVM classifier. The DWTFOSLBP-HF features selected with the mRMR method have also established superiority among the DWT based hybrid texture feature extraction techniques for databases randomly divided into different proportions of training and test datasets.
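
    One level of such a DWT-LBP hybrid can be sketched in Python with PyWavelets and scikit-image; taking the uniform-LBP histogram of the level-4 approximation subband only is a simplification of the paper's multi-level, multi-descriptor scheme, and the names are illustrative.

        import numpy as np
        import pywt
        from skimage.feature import local_binary_pattern

        def dwt_lbp_features(gray, wavelet="db3", level=4, P=8, R=1.0):
            """LBP histogram of the approximation subband at the chosen DWT level."""
            approx = pywt.wavedec2(gray, wavelet, level=level)[0]
            approx = (approx - approx.min()) / (approx.ptp() + 1e-12)  # rescale for LBP
            lbp = local_binary_pattern(approx, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            return hist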

  14. Classification of osteosarcoma T-ray responses using adaptive and rational wavelets for feature extraction

    Science.gov (United States)

    Ng, Desmond; Wong, Fu Tian; Withayachumnankul, Withawat; Findlay, David; Ferguson, Bradley; Abbott, Derek

    2007-12-01

    In this work we investigate new feature extraction algorithms on the T-ray responses of normal human bone cells and human osteosarcoma cells. One of the most promising feature extraction methods is the Discrete Wavelet Transform (DWT). However, the classification accuracy is dependent on the specific wavelet base chosen. Adaptive wavelets circumvent this problem by gradually adapting to the signal to retain optimum discriminatory information while removing redundant information. Using adaptive wavelets, a classification accuracy of 96.88% is obtained with a quadratic Bayesian classifier based on 25 features. In addition, the potential of using rational rather than the standard dyadic wavelets in classification is explored. The advantage rational wavelets have over dyadic wavelets is that they allow better adaptation of the scale factor according to the signal. An accuracy of 91.15% is obtained through rational wavelets with 12 coefficients using a Support Vector Machine (SVM) as the classifier. These results highlight adaptive and rational wavelets as efficient feature extraction methods and show the enormous potential of T-rays in cancer detection.

  15. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted by using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in their visual expression beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
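
    Laws' texture energy measures are built from outer products of short 1-D vectors; a minimal Python sketch over a (contrast-enhanced) spectrogram image follows. The particular mask pairs and the mean-absolute-energy pooling are common defaults assumed here, not taken from the paper.

        import numpy as np
        from scipy.signal import convolve2d

        # Laws' 1-D vectors: Level, Edge, Spot, Ripple
        L5 = np.array([1, 4, 6, 4, 1])
        E5 = np.array([-1, -2, 0, 2, 1])
        S5 = np.array([-1, 0, 2, 0, -1])
        R5 = np.array([1, -4, 6, -4, 1])

        def laws_energy(img, pairs=((L5, E5), (E5, S5), (S5, S5), (R5, R5))):
            """Mean absolute response under a few 5x5 Laws' masks (outer products)."""
            feats = []
            for u, v in pairs:
                resp = convolve2d(img, np.outer(u, v), mode="same")
                feats.append(np.mean(np.abs(resp)))
            return np.array(feats)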

  16. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    Science.gov (United States)

    Miao, X.; Xie, H.

    2015-12-01

    High-resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) polygon neighbor analysis separates melt ponds from submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.

  17. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Lin Liang

    2015-01-01

    Full Text Available A new method of automatically extracting low-dimensional features with a self-organizing mapping manifold is proposed for the detection of nonlinear faults in rotating machinery (such as rubbing and pedestal looseness). In the phase space reconstructed from a single vibration signal, self-organizing mapping (SOM) with an expectation-maximization iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment algorithm is adopted to compress the high-dimensional phase space into a low-dimensional feature space. The proposed method takes advantage of manifold learning for low-dimensional feature extraction and of the adaptive neighborhood construction of SOM, and it can extract the intrinsic fault features of interest in a two-dimensional projection space. To evaluate the performance of the proposed method, the Lorenz system was simulated and rotating machinery with nonlinear faults was tested. Compared with holospectrum approaches, the results reveal that the proposed method is superior in identifying faults and effective for rotating machinery condition monitoring.

  18. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points; (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines; and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  19. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    Science.gov (United States)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing is equipped with a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing spatially non-accurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road network together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  20. ROAD AND ROADSIDE FEATURE EXTRACTION USING IMAGERY AND LIDAR DATA FOR TRANSPORTATION OPERATION

    Directory of Open Access Journals (Sweden)

    S. Ural

    2015-03-01

    Full Text Available Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing is equipped with a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing spatially non-accurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road network together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  1. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization

    Science.gov (United States)

    Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because the speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation in feature extraction and reduction, describing a multigranularity combined feature scheme that is optimized by a hierarchical visual method. A novel way of generating the feature set, based on the S-transform and chaotic analysis, is proposed: 430 basic acoustic features (BAFS), 84 local spectral characteristics (MSCC, Mel S-transform cepstrum coefficients), and 12 chaotic features. Finally, a radar chart and F-score are proposed to optimize the features by hierarchical visual fusion. The feature set could be reduced from 526 dimensions to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features with a support vector machine (SVM) have the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation. PMID:28194222

  2. A tri-gram based feature extraction technique using linear probabilities of position specific scoring matrix for protein fold recognition.

    Science.gov (United States)

    Paliwal, Kuldip K; Sharma, Alok; Lyons, James; Dehzangi, Abdollah

    2014-03-01

    In biological sciences, deciphering the three-dimensional structure of a protein sequence is considered an important and challenging task. The identification of protein folds from primary protein sequences is an intermediate step in discovering the three-dimensional structure of a protein. This can be done by utilizing a feature extraction technique to accurately extract all the relevant information, followed by employing a suitable classifier to label an unknown protein. In the past, several feature extraction techniques have been developed, but with only limited recognition accuracy. In this study, we have developed a feature extraction technique based on tri-grams computed directly from Position Specific Scoring Matrices. The effectiveness of the feature extraction technique has been shown on two benchmark datasets. The proposed technique exhibits up to 4.4% improvement in protein fold recognition accuracy compared to state-of-the-art feature extraction techniques.
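
    The tri-gram construction itself is compact: with an L x 20 matrix P of linear probabilities from the PSSM, the feature is F[a, b, c] = sum over positions i of P[i, a] * P[i+1, b] * P[i+2, c], flattened to a 20^3 = 8000-dimensional vector. A minimal NumPy version with illustrative names:

        import numpy as np

        def pssm_trigram(pssm):
            """Tri-gram feature from an L x 20 matrix of PSSM linear probabilities."""
            P = np.asarray(pssm, dtype=np.float64)
            # einsum sums over the shared position index of three shifted views
            F = np.einsum("ia,ib,ic->abc", P[:-2], P[1:-1], P[2:])
            return F.ravel()                     # 20**3 = 8000 features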

  3. Extraction of Informative Blocks from Deep Web Page Using Similar Layout Feature

    OpenAIRE

    Zeng, Jun; Flanagan, Brendan; Hirokawa, Sachio

    2013-01-01

    Due to the explosive growth and popularity of the deep web, information extraction from deep web pages has gained more and more attention. However, the HTML structure of web pages has become more complicated, making it difficult to recognize target content by analyzing the HTML source code alone. In this paper, we propose a method to extract the informative blocks from a deep web page using the layout feature. We consider the visual rectangular region of an HTML element as a visual block in web page....

  4. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    Science.gov (United States)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety for longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we considered this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency, and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and the compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even when using only a third of the original samples.

  5. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals.

    Science.gov (United States)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L

    2016-04-17

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety for longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we considered this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency, and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and the compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even when using only a third of the original samples.

  6. Human action classification using adaptive key frame interval for feature extraction

    Science.gov (United States)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on adaptive key frame interval (AKFI) feature extraction is presented. Since human movement periods differ, action intervals that contain intensive and compact motion information are considered in this work. We specify the AKFI by analyzing the amount of motion over time. A key frame is defined to be a local minimum of interframe motion, which is computed by frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by an adaptive motion history image and a key pose history image. The action representation consists of the local orientation histogram of the features during the AKFI. The experimental results on the Weizmann dataset, KTH dataset, and UT Interaction dataset demonstrate that the features can effectively classify actions and can classify irregular cases of walking compared to other well-known algorithms.

  7. Research on Feature Extraction of Composite Pseudocode Phase Modulation-Carrier Frequency Modulation Signal Based on PWD Transform

    Institute of Scientific and Technical Information of China (English)

    LI Ming-zi; ZHAO Hui-chang

    2008-01-01

    The identification features of a composite pseudocode phase modulation and carrier frequency modulation signal include the pseudocode and the modulation frequency. In this paper, the PWD (pseudo Wigner distribution) is used to extract these features. First, the pseudocode feature is extracted using the amplitude output of the PWD and correlation filter technology. Then the frequency modulation feature is extracted by PWD analysis of the signal processed by an anti-phase operation according to the extracted pseudocode feature, i.e., the position information of abrupt phase changes. The simulation results show that both the frequency modulation feature and the phase change positions caused by the pseudocode phase modulation can be extracted effectively for SNR = 3 dB.

  8. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    Science.gov (United States)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy, and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.

  9. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications.

    Science.gov (United States)

    Wiggins, M; Saad, A; Litt, B; Vachtsevanos, G

    2008-01-01

    OBJECTIVE: To classify patients by age based upon information extracted from their electrocardiograms (ECGs), and to develop and compare the performance of Bayesian classifiers. METHODS AND MATERIAL: We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of the conditional probability tables for the classifier and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first using a greedy hill-climb search, and the second employing evolutionary computing using a genetic algorithm (GA). RESULTS AND CONCLUSIONS: The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general, thereby identifying the dependencies among the variables of interest, dependencies that are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used in medical applications for diagnosis and prediction purposes.

  10. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    Science.gov (United States)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed, and the image (character/numeral) is further divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features), and similarly the zone centroid is computed (two features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image. Some zones may be empty; the values of such zones in the feature vector are set to zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained recognition rates of 97.55%, 94%, 92.5%, and 95.2% for Kannada, Telugu, Tamil, and Malayalam numerals, respectively.
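
    A minimal Python sketch of this zone scheme for a binary numeral image follows; the square grid layout and the plain (non-circular) angle averaging are simplifying assumptions over the paper's description of n zones with four features each.

        import numpy as np

        def zone_centroid_features(img, g=4):
            """Per zone: average distance and angle to the character centroid,
            plus the normalized zone centroid (four features per zone)."""
            ys, xs = np.nonzero(img)
            cy, cx = ys.mean(), xs.mean()        # character centroid
            H, W = img.shape
            feats = []
            for zi in range(g):
                for zj in range(g):
                    r0, c0 = zi * H // g, zj * W // g
                    zone = img[r0:(zi + 1) * H // g, c0:(zj + 1) * W // g]
                    py, px = np.nonzero(zone)
                    if len(py) == 0:             # empty zone -> zero features
                        feats += [0.0, 0.0, 0.0, 0.0]
                        continue
                    gy, gx = py + r0, px + c0
                    feats += [np.hypot(gy - cy, gx - cx).mean(),
                              np.arctan2(gy - cy, gx - cx).mean(),
                              gy.mean() / H, gx.mean() / W]
            return np.array(feats)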

  11. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and take the neighborhood context into consideration. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retaining DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at area level and at object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-size LiDAR data.

  12. Nonparametric Single-Trial EEG Feature Extraction and Classification of Driver's Cognitive Responses

    Directory of Open Access Journals (Sweden)

    I-Fang Chung

    2008-05-01

    Full Text Available We proposed an electroencephalographic (EEG) signal analysis approach to investigate the driver's cognitive response to traffic-light experiments in a virtual-reality (VR) based simulated driving environment. EEG signals are digitally sampled and then transformed by three different feature extraction methods, including nonparametric weighted feature extraction (NWFE), principal component analysis (PCA), and linear discriminant analysis (LDA), which were also used to reduce the feature dimension and project the measured EEG signals to a feature space spanned by their eigenvectors. After that, the mapped data could be classified with fewer features, and the classification results were compared using two different classifiers, k nearest neighbor classification (KNNC) and the naive Bayes classifier (NBC). Experimental data were collected from 6 subjects, and the results show that NWFE+NBC gives the best classification accuracy, ranging from 71% to 77%, which is 10% to 24% higher than LDA+KNN1. It also demonstrates the feasibility of detecting and analyzing single-trial EEG signals that represent operators' cognitive states and responses to task events.

  13. Nonparametric Single-Trial EEG Feature Extraction and Classification of Driver's Cognitive Responses

    Science.gov (United States)

    Lin, Chin-Teng; Lin, Ken-Li; Ko, Li-Wei; Liang, Sheng-Fu; Kuo, Bor-Chen; Chung, I.-Fang

    2008-12-01

    We propose an electroencephalographic (EEG) signal analysis approach to investigate the driver's cognitive response to traffic-light experiments in a virtual-reality (VR)-based simulated driving environment. EEG signals are digitally sampled and then transformed by three different feature extraction methods, namely nonparametric weighted feature extraction (NWFE), principal component analysis (PCA), and linear discriminant analysis (LDA), which are also used to reduce the feature dimension and project the measured EEG signals onto a feature space spanned by their eigenvectors. The mapped data can then be classified with fewer features, and the classification results are compared using two different classifiers, k-nearest-neighbor classification (KNNC) and the naive Bayes classifier (NBC). Experimental data were collected from 6 subjects, and the results show that NWFE+NBC gives the best classification accuracy, ranging from 71%–77%, which is 10%–24% higher than LDA+KNN1. This also demonstrates the feasibility of detecting and analyzing single-trial EEG signals that represent operators' cognitive states and responses to task events.

  14. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  15. Aerial visible-thermal infrared hyperspectral feature extraction technology and its application to object identification

    Science.gov (United States)

    Jie-lin, Zhang; Jun-hu, Wang; Mi, Zhou; Yan-ju, Huang; Ding, Wu

    2014-03-01

    Based on data from an aerial visible-thermal infrared hyperspectral imaging system (CASI/SASI/TASI), field spectrometer data and multi-source geological information, this paper utilizes hyperspectral data processing and feature extraction technology to identify uranium mineralization factors. The spectral features of typical tetravalent and hexavalent uranium minerals and of the mineralization factors are established, hyperspectral logging technology for drill cores and trenches is developed, and the relationships between radioactive intensity and spectral characteristics are built. These methods have been applied to characterize the uranium mineralization settings of granite-type and sandstone-type uranium deposits in south and northwest China, where successful uranium prospecting outcomes have been achieved.

  16. Wavelet packet based feature extraction and recognition of license plate characters

    Institute of Scientific and Technical Information of China (English)

    HUANG Wei; LU Xiaobo; LING Xiaojing

    2005-01-01

    To study license plate character recognition, this paper proposes a feature extraction method for license plate characters based on the two-dimensional wavelet packet transform. We decompose license plate character images with the two-dimensional wavelet packet transform and search for the optimal wavelet packet basis. The paper presents a criterion for this search and a practical algorithm. The resulting optimal wavelet packet basis is used as the feature of the license plate character, and a BP neural network is used to classify the characters. The test results show that the proposed method achieves a higher recognition rate than traditional methods.
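
    The record's optimal-basis criterion is its own contribution and is not reproduced here; as a hedged sketch, the normalized subband energies of a fixed-depth 2D wavelet packet decomposition (via PyWavelets; wavelet and depth are our assumptions) already give a usable character feature vector:

    import numpy as np
    import pywt

    def wp2d_features(char_img, wavelet="db2", level=2):
        """Normalized energy of each 2D wavelet packet subband of a
        license plate character image."""
        wp = pywt.WaveletPacket2D(data=char_img.astype(float), wavelet=wavelet,
                                  mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        return energies / energies.sum()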

  17. An efficient approach of EEG feature extraction and classification for brain computer interface

    Institute of Scientific and Technical Information of China (English)

    Wu Ting; Yan Guozheng; Yang Banghua

    2009-01-01

    In the study of brain-computer interfaces, a method of feature extraction and classification for two kinds of imagined movements is proposed. It takes the Euclidean distance between the mean traces recorded from the channels under the two kinds of imagination as a feature, and determines the imagination class using a threshold value. The background of the experiment and its theoretical foundation are analyzed with reference to the BCI 2003 data sets, and the classification precision is compared with the best result of the competition. The results show that the method has high precision and is well suited for application in practical systems.
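
    The classification rule amounts to comparing the Euclidean distances to two class-mean traces, which is equivalent to thresholding their difference at zero. A minimal sketch (names are illustrative):

    import numpy as np

    def classify_trial(trial, mean_a, mean_b):
        """Assign a trial (channels x samples) to the imagery class whose
        mean trace, computed from training data, is closer in Euclidean
        distance."""
        d_a = np.linalg.norm(trial - mean_a)
        d_b = np.linalg.norm(trial - mean_b)
        return "A" if d_a < d_b else "B"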

  18. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    Science.gov (United States)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe commonly sub-mesh-scale elevation details (ridges and valleys), which are often ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes from the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of approximately ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed the highest accuracy. The comparison of model results to debris lines demonstrates that additional

  19. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    Directory of Open Access Journals (Sweden)

    A. Huschauer

    2017-06-01

    Full Text Available Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  20. Application of Wavelet Packet Energy Spectrum to Extract the Feature of the Pulse Signal

    Institute of Scientific and Technical Information of China (English)

    Dian-guo CAO; Yu-qiang WU; Xue-wen SHI; Peng WANG

    2010-01-01

    The wavelet packet is presented as a new kind of multi-scale analysis technique that follows wavelet analysis. The fundamentals and the implementation of the wavelet packet analysis method are described in this paper. A new approach that applies the wavelet packet method to extract features of the pulse signal from the angle of energy distribution is expounded. Quantizing and analyzing pulse signals with the wavelet packet analysis method makes them convenient for a microchip to process and judge. Various experiments were simulated in the lab, and they prove that wavelet packet energy spectrum analysis is a convenient and accurate method for extracting features of the pulse signal.
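
    A sketch of a wavelet packet energy spectrum for a 1D pulse signal with PyWavelets (the mother wavelet and decomposition depth are our assumptions, not values from the paper):

    import numpy as np
    import pywt

    def wp_energy_spectrum(pulse, wavelet="db4", level=3):
        """Relative energy in each terminal wavelet packet node, ordered
        by frequency: the 'energy spectrum' of the pulse signal."""
        wp = pywt.WaveletPacket(pulse, wavelet, mode="symmetric", maxlevel=level)
        e = np.array([np.sum(n.data ** 2) for n in wp.get_level(level, order="freq")])
        return e / e.sum()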

  1. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expressions and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.

  2. Feature Extraction of Localized Scattering Centers Using the Modified TLS-Prony Algorithm and Its Applications

    Institute of Scientific and Technical Information of China (English)

    王军

    2002-01-01

    This paper presents an all-parametric model of a radar target in the optical region, in which the localized scattering centers' frequency- and aspect-angle-dependent scattering levels and their range and azimuth locations are modeled as feature vectors, and the traditional TLS-Prony algorithm is modified to extract these feature vectors. Analysis of the Cramer-Rao bound shows that the modified algorithm not only relaxes the high signal-to-noise ratio (SNR) threshold of the traditional TLS-Prony algorithm, but is also suitable for extracting large damping coefficients and for the high-resolution estimation of closely spaced poles. Finally, an illustrative example is presented to verify its practicability. The experimental results show that the developed method can not only recognize two airplane-like targets with similar shapes at low SNR, but can also compress the original radar data with high fidelity.

  3. THE MORPHOLOGICAL PYRAMID AND ITS APPLICATIONS TO REMOTE SENSING: MULTIRESOLUTION DATA ANALYSIS AND FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Laporterie Florence

    2011-05-01

    Full Text Available In remote sensing, sensors are more and more numerous, and their spatial resolution is higher and higher. Thus, quick and accurate characterisation of the increasing amount of data has become an important issue. This paper deals with an approach combining a pyramidal algorithm and mathematical morphology to study the physiographic characteristics of terrestrial ecosystems. Our pyramidal strategy first applies morphological filters, then extracts well-known landscape features at each level of resolution. The approach is applied to a digitised aerial photograph representing a heterogeneous landscape of orchards and forests along the Garonne river (France). This example, simulating very high spatial resolution imagery, highlights the influence of the parameters of the pyramid according to the spatial properties of the studied patterns. It is shown that the morphological pyramid approach is a promising approach to multi-level feature extraction by modelling geometrically relevant parameters.

  4. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    Science.gov (United States)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to the identification of persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We perform the feature extraction on the eye and nose images separately, and then use a multi-layer perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).

  5. Performance Evaluation of Conventional and Hybrid Feature Extractions Using Multivariate HMM Classifier

    Directory of Open Access Journals (Sweden)

    Veton Z. Këpuska

    2015-04-01

    Full Text Available Speech feature extraction and likelihood evaluation are considered the main issues in speech recognition systems. Although both techniques have been developed and improved over time, they remain a very active area of research. This paper investigates the performance of conventional and hybrid speech feature extraction algorithms, namely Mel Frequency Cepstrum Coefficients (MFCC), Linear Prediction Cepstrum Coefficients (LPCC), perceptual linear prediction (PLP) and RASTA-PLP, using a multivariate Hidden Markov Model (HMM) classifier. The performance of the speech recognition system is evaluated based on the word error rate (WER), reported for different data sets of human voices from the isolated-speech TIDIGITS corpus sampled at 8 kHz. The data comprise the pronunciations of eleven words (zero to nine, plus oh) recorded from 208 different adult speakers (men and women), each uttering each word twice.
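
    As an illustration, MFCC observation matrices of the kind fed to such an HMM can be computed with librosa (a stand-in toolkit, since the record names none; PLP and RASTA-PLP would need a dedicated package):

    import numpy as np
    import librosa

    sr = 8000                                  # TIDIGITS sampling rate
    t = np.arange(sr) / sr
    y = np.sin(2 * np.pi * 440 * t).astype(np.float32)   # stand-in for a spoken digit

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 coefficients per frame
    delta = librosa.feature.delta(mfcc)                  # common hybrid extension
    features = np.vstack([mfcc, delta])                  # (26, n_frames) for the HMM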

  6. Special object extraction from medieval books using superpixels and bag-of-features

    Science.gov (United States)

    Yang, Ying; Rushmeier, Holly

    2017-01-01

    We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working at the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects using a bag-of-features approach, where a superpixel category classifier is trained on the local features of the superpixels of the training images. With the trained classifier, we can assign category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, compared to state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.
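
    A hedged sketch of the superpixel stage with scikit-image; the simple per-superpixel statistics below stand in for the record's bag-of-features descriptors:

    import numpy as np
    from skimage.segmentation import slic
    from skimage.color import rgb2gray

    def superpixel_features(image, n_segments=400):
        """Segment an RGB page image into superpixels and compute a toy
        per-superpixel descriptor (mean colour + grey-level std)."""
        labels = slic(image, n_segments=n_segments, compactness=10)
        grey = rgb2gray(image)
        feats = [np.r_[image[labels == lab].mean(axis=0), grey[labels == lab].std()]
                 for lab in np.unique(labels)]
        return labels, np.array(feats)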

  7. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IoT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is done into visual feature extraction and the establishment of visual tags for human faces, based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can establish visual tags for objects conveniently.

  8. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    Energy Technology Data Exchange (ETDEWEB)

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes for errors in the infectious disease track subtasks.

  9. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    Science.gov (United States)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining applications adopt Artificial Neural Networks (ANNs) to solve problems, but many issues are involved in training an ANN, such as the number of labeled samples, training time and performance, the number of hidden layers and the transfer function. If the compared results are not as expected, it cannot be known clearly which dimension causes the deviation; the main reason is that an ANN fits the compared results by modifying weights, so it does not improve the original feature extraction algorithm of the image but tends to obtain the correct value by weighting the result. In view of these problems, this paper puts forward a method to assist ANN-based image data analysis. Normally, a parameter is set as the value used to extract feature vectors when processing an image, which we regard as a weight. The experiment uses the values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; since SURF extracts different feature points according to the extraction values, we first perform semi-supervised clustering on these values and then use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one complete comparison but only compares group centroids, mainly to save effort and speed up the process; the retrieved results are then observed and analyzed. The method mainly clusters and classifies the image feature points, assigns values to the groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network

  10. Effects of LiDAR Derived DEM Resolution on Hydrographic Feature Extraction

    Science.gov (United States)

    Yang, P.; Ames, D. P.; Glenn, N. F.; Anderson, D.

    2010-12-01

    This paper examines the effect of LiDAR-derived digital elevation model (DEM) resolution on digitally extracted stream networks with respect to known stream channel locations. Two study sites, the Reynolds Creek Experimental Watershed (RCEW) and the Dry Creek Experimental Watershed (DCEW), which represent terrain characteristics of lower- and intermediate-elevation mountainous watersheds in the Intermountain West, were selected as study areas. DEMs reflecting the bare earth surface were created from the LiDAR observations at a series of raster cell sizes (from 1 m to 60 m) using spatial interpolation techniques. The effect of DEM resolution on the derived hydrographic features (specifically stream channels) was studied. Stream length, watershed area, and sinuosity were explored at each of the raster cell sizes, and the deviation from the known channel location, estimated by the root mean square error (RMSE) between the surveyed and extracted channel locations, was computed for each DEM and extracted stream network. As expected, the results indicate that the DEM-based hydrographic extraction process provides more detailed hydrographic features at finer resolutions. The RMSE between known and modeled channel locations generally increased with larger cell sizes, with a greater effect in the larger RCEW. Sensitivity analyses on sinuosity demonstrated that the shape of the streams obtained from LiDAR data matched the reference data best at an intermediate cell size in the range of 5 to 10 m rather than at the highest resolution, likely due to the original point spacing, terrain characteristics, and LiDAR noise. More importantly, the absolute sinuosity deviation was smallest at a cell size of 10 m in both experimental watersheds, which suggests that the optimal cell size for LiDAR-derived DEMs used for hydrographic feature extraction is 10 m.

  11. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    Science.gov (United States)

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives, to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined primary (diagnosis and cause of death) and secondary (e.g., symptoms) features for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports, in three sequential evaluations of 100 reports each. Moreover, we evaluated the contribution of VaeTM to case classification; an information retrieval-based approach was used to identify anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were standard text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated the sensitivity and specificity of anaphylaxis case classification for the three approaches. VaeTM performed best in extracting the diagnosis, second-level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%), equal to that of the dedicated text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.

  12. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    OpenAIRE

    Qingqing Lu; Jiexin Pu; Zhonghua Liu

    2014-01-01

    Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, the interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) transforms the A-Scan data and the approximation coefficient...

  13. Extracting features for power system vulnerability assessment from wide-area measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kamwa, I. [Hydro-Quebec, Varennes, PQ (Canada). IREQ; Pradhan, A.; Joos, G. [McGill Univ., Montreal, PQ (Canada)

    2006-07-01

    Many power systems now operate close to their stability limits as a result of deregulation. Some utilities have chosen to install phasor measurement units (PMUs) to monitor power system dynamics. The synchronized phasors from different areas of a power system, made available through a wide-area measurement system (WAMS), are expected to provide an effective security assessment tool as well as stabilizing control action for inter-area oscillations and a system protection scheme (SPS) to evade possible blackouts. This paper presented a tool for extracting features for vulnerability assessment from WAMS data. A Fourier-transform-based technique was proposed for monitoring inter-area oscillations. FFT, wavelet transform and curve fitting approaches were investigated for analyzing oscillatory signals. A dynamic voltage stability prediction algorithm was proposed for control action. An integrated framework was then proposed to assess a power system through features extracted from WAMS data on first-swing stability, voltage stability and inter-area oscillations. The centre of inertia (COI) concept was applied to the angles of the voltage phasors. Prony analysis was applied to the filtered signals to extract the damping coefficients. The minimum post-fault voltage of an area was considered for voltage stability, and an algorithm was used to monitor voltage stability issues. A data clustering technique was applied to group the features for improved system visualization. The overall performance of the technique was examined using a 67-bus system with 38 PMUs. The method used to extract features from both frequency- and time-domain analysis was provided. The test power system was described. The results of 4 case studies indicated that adoption of the method will be beneficial for system operators. 13 refs., 2 tabs., 13 figs.

  14. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning are critical for object recognition and detection. By embedding the context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency ... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset.

  15. Feature extraction and analysis of online reviews for the recommendation of books using opinion mining technique

    OpenAIRE

    Shahab Saquib Sohail; Jamshed Siddiqui; Rashid Ali

    2016-01-01

    The customer's review plays an important role in deciding purchasing behaviour for online shopping, as a customer prefers to get the opinions of other customers by observing their views in online product reviews, blogs, social networking sites, etc. Customer reviews reflect the customer's sentiments and have substantial significance for products being sold online, including electronic gadgets, movies, household appliances and books. Hence, extracting the exact featur...

  16. Improving Identification of Area Targets by Integrated Analysis of Hyperspectral Data and Extracted Texture Features

    Science.gov (United States)

    2012-09-01

    [Only fragments of this record's abstract survive: part of the report's acronym list (FWHM: Full Width Half Max; GIS: Geographic Information System; GLCM: Gray Level Co-occurrence Matrix) and excerpts on GLCM texture features, noting that texture features are extracted from the GLCM and that the window size defines the number of surrounding pixels used to create it, e.g. a 3x3 window includes only the 8 pixels immediately adjacent to the center pixel.]

  17. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities for analyzing and identifying signals. It would therefore be significant to build an auditory model based on the mechanism of the human auditory system, which may improve the effectiveness of mechanical signal analysis and enrich the methods of mechanical fault feature extraction. However, the existing methods are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced here for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with a Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner hair cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults: misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.

  18. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    Science.gov (United States)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond against them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can see that malware exhibits various kinds of scan patterns in choosing destination IP addresses. Since many of those oscillations appear to have a natural periodicity, as if they were signal waveforms, we considered applying a spectrum analysis methodology to extract malware features. With a focus on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malware.

  19. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    Science.gov (United States)

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to the EEG signals, and the relative wavelet energy is calculated in terms of the detail coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in a resting condition with eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and k-nearest-neighbor classifiers with the approximation (A4) and detail (D4) coefficients, which represent the frequency ranges of 0.53-3.06 Hz and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
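
    The relative wavelet energy feature is straightforward to reproduce with PyWavelets (a sketch; the abstract implies a 4-level decomposition, while the mother wavelet shown here is an assumption):

    import numpy as np
    import pywt

    def relative_wavelet_energy(eeg, wavelet="db4", level=4):
        """Relative energy of the approximation (A4) and detail (D4..D1)
        coefficients of a 4-level DWT of one EEG channel."""
        coeffs = pywt.wavedec(eeg, wavelet, level=level)   # [A4, D4, D3, D2, D1]
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()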

  20. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images is investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is computed with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared across various sample sizes by Support Vector Machines using k-fold cross-validation. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
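
    A sketch of windowed Haralick-style statistics with scikit-image (the chosen properties and offsets are assumptions; the function is spelled greycomatrix in scikit-image releases before 0.19):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def window_texture(patch, distances=(1,), angles=(0, np.pi / 2)):
        """GLCM statistics for one uint8 grey-level window centred on a
        candidate pixel; the window size is the parameter the study varies."""
        glcm = graycomatrix(patch, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])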

  1. The Fault Feature Extraction of Rolling Bearing Based on EMD and Difference Spectrum of Singular Value

    Directory of Open Access Journals (Sweden)

    Te Han

    2016-01-01

    Full Text Available Nowadays, the fault diagnosis of rolling bearings in aeroengines is based on the vibration signal measured on the casing instead of the bearing block. However, the vibration signal of the bearing is often covered by a series of complex components caused by other structures (rotor, gears). Therefore, when a bearing fails, it is not certain that the fault feature can be extracted from the vibration signal on the casing. To solve this problem, a novel fault feature extraction method for rolling bearings based on empirical mode decomposition (EMD) and the difference spectrum of singular values is proposed in this paper. Firstly, the vibration signal is decomposed by EMD. Next, the difference spectrum of singular values method is applied. The study finds that each peak in the difference spectrum corresponds to a component in the original signal. According to the peaks in the difference spectrum, the component signal of the bearing fault can be reconstructed. To validate the proposed method, bearing fault data collected on the casing are analyzed. The results indicate that the proposed rolling bearing diagnosis method can accurately extract the fault feature that is submerged in other component signals and noise.
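
    The difference spectrum of singular values can be sketched in a few lines of NumPy (the Hankel/trajectory-matrix construction is one common choice; the row count is an assumption):

    import numpy as np

    def sv_difference_spectrum(signal, rows=200):
        """Singular values of a Hankel (trajectory) matrix built from a
        signal (e.g. one EMD mode), and their difference spectrum; a
        dominant peak at index k suggests reconstructing from the first
        k singular values."""
        cols = len(signal) - rows + 1
        hankel = np.lib.stride_tricks.sliding_window_view(signal, cols)
        s = np.linalg.svd(hankel, compute_uv=False)
        return s, -np.diff(s)       # difference spectrum b_i = s_i - s_{i+1}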

  2. Applications of Wigner high-order spectra in feature extraction of acoustic emission signals

    Institute of Scientific and Technical Information of China (English)

    Xiao Siwen; Liao Chuanjun; Li Xuejun

    2009-01-01

    The characteristics of typical AE signals initiated by mechanical component damage are analyzed. Based on the principle of extracting acoustic emission (AE) signals from damaged components, the paper introduces Wigner high-order spectra to the field of feature extraction and fault diagnosis from AE signals. The main properties of the Wigner bispectrum, the Wigner trispectrum and the Wigner-Ville distribution (WVD) are discussed, including time-frequency resolution, energy accumulation, reduction of cross terms and noise elimination. The Wigner trispectrum is employed for the fault diagnosis of rolling bearings with AE techniques. The fault features read from the experimental data analysis are clear, accurate and intuitive, and the validity and accuracy of the proposed Wigner high-order spectra methods agree quite well with the simulation results. The simulation and research results indicate that Wigner high-order spectra are quite useful for condition monitoring and fault diagnosis in conjunction with AE techniques, and have very important research and application value for feature extraction and fault diagnosis based on AE signals from mechanical component damage.

  3. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    Directory of Open Access Journals (Sweden)

    Miroslav Benco

    2014-07-01

    Full Text Available This paper discusses research in the area of texture image classification, more specifically the combination of texture and colour features. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely the GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in the experiments. For texture classification, the support vector machine is used. In the first approach, the methods are applied in separate channels of the colour image. The experimental results show a large gain in precision for colour texture retrieval by GLCM. Therefore, the GLCM is modified to extract probability matrices directly from the colour image. A method for a 13-direction neighbourhood system is proposed, and formulas for computing the probability matrices are presented. The proposed method is called CLCM (colour-level co-occurrence matrices), and experimental results show that it is a powerful method for colour texture classification.

  4. Improved method for the feature extraction of laser scanner using genetic clustering

    Institute of Scientific and Technical Information of China (English)

    Yu Jinxia; Cai Zixing; Duan Zhuohua

    2008-01-01

    Feature extraction from range images provided by ranging sensors is a key issue in pattern recognition. To automatically extract environmental features sensed by a 2D laser scanner, an improved method based on genetic clustering, VGA-clustering, is presented. By integrating the spatial neighbourhood information of the range data into the fuzzy clustering algorithm, a weighted fuzzy clustering algorithm (WFCA) is introduced instead of the standard clustering algorithm to realize feature extraction for the laser scanner. Since the number of clusters is unknown in advance, several validation index functions are used to estimate the validity of the different clustering algorithms, and one validation index is selected as the fitness function of the genetic algorithm so as to determine the correct number of clusters automatically. At the same time, an improved genetic algorithm, IVGA, based on VGA is proposed to avoid the local optima of the clustering algorithm; it is implemented by increasing the population diversity and improving the elitist genetic operators to enhance the local search capacity and to speed up convergence. Comparison with other algorithms demonstrates the effectiveness of the introduced algorithm.

  5. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    Directory of Open Access Journals (Sweden)

    Zaichao Ma

    2015-04-01

    Full Text Available Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analysis for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although an improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree. Moreover, EEMD needs to determine the amplitude of the added noise. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), which integrates Phase Space Reconstruction (PSR) and Manifold Learning (ML) to modify EEMD. We provide the principle and detailed procedure of PSEEMD, and analyses on a simulated signal and an actual vibration signal derived from a rubbing rotor are performed. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixed features from the investigated signals and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features interfered with by a certain amount of noise.

  6. Dermoscopic diagnosis of melanoma in a 4D space constructed by active contour extracted features.

    Science.gov (United States)

    Mete, Mutlu; Sirakov, Nikolay Metodiev

    2012-10-01

    Dermoscopy, also known as epiluminescence microscopy, is a major imaging technique used in the assessment of melanoma and other diseases of the skin. In this study we propose a computer-aided method and tools for fast and automated diagnosis of malignant skin lesions using non-linear classifiers. The method consists of three main stages: (1) extraction of skin lesion features from images; (2) feature measurement and digitization; and (3) binary diagnosis (classification) of the skin lesion using the extracted features. A shrinking active contour (S-ACES) extracts the color region boundaries, the number of colors, and the lesion's boundary, which is used to calculate boundary abruptness. Quantification methods for measuring asymmetry and abrupt endings in skin lesions are elaborated to realize the second stage of the method. The total dermoscopy score (TDS) formula of the ABCD rule is modeled as a linear support vector machine (SVM), and a polynomial SVM classifier is further developed. To validate the proposed framework, a dataset of 64 lesion images was selected from a collection with ground truth. The lesions were classified as benign or malignant by the TDS-based model and the polynomial SVM classifier. Comparing the results, we show that the latter model has a better F-measure than the TDS-based (linear) model in classifying skin lesions into the malignant and benign groups. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    Science.gov (United States)

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that easily adapts to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured; when different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user-dependent and 2) motion-dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features, and a motion classifier can be constructed on this feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions; the bilinear EMG model with the estimated user-dependent factor can then extract the user-independent features from the novel user's data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control, using four-channel EMG signals measured from the subjects' forearms. Our method achieved 73% accuracy, which was statistically significantly different from the accuracy of standard non-multiuser interfaces, as shown by a two-sample t-test at a significance level of 1%.
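
    The abstract does not spell out the estimation procedure; as a loose illustration of the style/content idea behind such bilinear models (a generic SVD-based two-factor decomposition, not the authors' algorithm):

    import numpy as np

    def fit_bilinear(X, k=3):
        """Factor a matrix X of stacked EMG features (rows indexed by
        user x feature, columns by motion) into a user-dependent factor A
        and a motion-dependent factor B with X ~ A @ B."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        A = U[:, :k] * s[:k]      # user-dependent factor
        B = Vt[:k]                # motion-dependent (user-independent) factor
        return A, B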

  8. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    Directory of Open Access Journals (Sweden)

    Su Su Htay

    2013-01-01

    Full Text Available Due to the development of e-commerce and web technology, most online merchant sites enable customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Extracting opinions about products from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea for finding the opinion words or phrases for each feature from customer reviews in an efficient way. Our focus is to obtain the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.

  9. Three-Dimensional Precession Feature Extraction of Ballistic Targets Based on Narrowband Radar Network

    Directory of Open Access Journals (Sweden)

    Zhao Shuang

    2017-02-01

    Full Text Available Micro-motion is a crucial feature in ballistic target recognition. To address the problem that single-view observations cannot extract true micro-motion parameters, we propose a novel algorithm based on a narrowband radar network to extract three-dimensional precession features. First, we construct a precession model of a cone-shaped target and, as a precondition, consider the invisibility problem of scattering centers. We then analyze in detail the micro-Doppler modulation caused by precession. Next, we match each scattering center across the different perspectives based on the ratio of the top scattering center's micro-Doppler frequency modulation coefficients, and extract the 3D coning vector of the target by establishing associated multi-aspect equation systems. In addition, we estimate the feature parameters by utilizing the correlation of the micro-Doppler frequency modulation coefficients of the three scattering centers, combined with a frequency compensation method. We then calculate the coordinates of the conical point at each moment and reconstruct its 3D spatial motion. Finally, we provide simulation results to validate the proposed algorithm.

  10. Extraction of enclosure culture area from SPOT-5 image based on texture feature

    Science.gov (United States)

    Tang, Wei; Zhao, Shuhe; Ma, Ronghua; Wang, Chunhong; Zhang, Shouxuan; Li, Xinliang

    2007-06-01

    The east Taihu Lake region is characterized by high-density and large areas of enclosure culture, which tend to cause eutrophication of the lake and worsen the quality of its water. This paper takes a 380×380 area of the east Taihu Lake image as an example and discusses an extraction method combining the texture features of high-resolution imagery with spectral information. Firstly, we choose the best band combination of bands 1, 3 and 4 according to the principles of maximal entropy combination and the OIF index. After applying band arithmetic and a principal component analysis (PCA) transformation, we achieve dimensionality reduction and data compression. Subsequently, the textures of the first principal component image are analyzed using grey level co-occurrence matrices (GLCM), yielding the statistical eigenvalues contrast, entropy and mean. The mean eigenvalue is chosen as the optimal index, and appropriate conditional extraction thresholds are determined. Finally, decision trees are established to realize the extraction of the enclosure culture area. Combining spectral information with spatial texture features, we obtain a satisfactory extraction result and provide a technical reference for a wide-area survey of enclosure culture.

  11. A Novel Method for PD Feature Extraction of Power Cable with Renyi Entropy

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2015-11-01

    Full Text Available Partial discharge (PD) detection can effectively support the condition-based maintenance of XLPE (cross-linked polyethylene) cable, so it is a key direction in the development of equipment maintenance in power systems. At present, a main method of PD detection is broadband electromagnetic coupling with a high-frequency current transformer (HFCT). Due to the strong electromagnetic interference (EMI) generated among the mass of cables in a tunnel and the impedance mismatch between the HFCT and the data acquisition equipment, the features of the current pulses generated by PD are often submerged in the background noise. Conventional methods for stationary signal analysis cannot analyze the PD signal, which is transient and non-stationary. Although the Shannon wavelet singular entropy (SWSE) algorithm can be used to analyze the PD signal at some level, its precision and anti-interference capability in PD feature extraction are still insufficient. To address this problem, a novel method named Renyi wavelet packet singular entropy (RWPSE) is proposed and applied to PD feature extraction on power cables. Taking a three-level system as an example, we analyze the statistical properties of Renyi entropy and its intrinsic correlation with Shannon entropy under different values of α. At the same time, the discrete wavelet packet transform (DWPT) is used instead of the discrete wavelet transform (DWT) and is combined with Renyi entropy to construct the RWPSE algorithm. Taking as the research object the grounding current signal from the shielding layer of an XLPE cable, which includes the current pulse feature of PD, the effectiveness of the novel method is tested. The theoretical analysis and experimental results show that, compared to SWSE, RWPSE can not only improve the feature extraction accuracy for PD but also suppress EMI effectively.
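
    A minimal sketch of an RWPSE-style quantity with PyWavelets and NumPy, following our reading of the abstract (DWPT coefficients, then singular values, then Renyi entropy; all parameter choices are assumptions):

    import numpy as np
    import pywt

    def rwpse(signal, wavelet="db4", level=3, alpha=2.0):
        """Renyi (alpha != 1) entropy of the normalized singular value
        distribution of the matrix of terminal DWPT node coefficients."""
        wp = pywt.WaveletPacket(signal, wavelet, mode="symmetric", maxlevel=level)
        coeffs = np.array([n.data for n in wp.get_level(level, order="freq")])
        s = np.linalg.svd(coeffs, compute_uv=False)
        p = s / s.sum()
        p = p[p > 0]
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)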

  12. A new method to extract stable feature points based on self-generated simulation images

    Science.gov (United States)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received a lot of attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge response points, with the disadvantage that the thresholds must be set manually. The main idea of this paper is to identify stable extrema with a machine learning algorithm. Firstly, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the set of simulated images of the original image; by construction, the affine transformation of each generated image is also known. Compared with the traditional matching process, which relies on the unstable RANSAC method to obtain the affine transformation, this approach is more stable and accurate. Secondly, we calculate a stability value for each feature point from the image set and its known affine transformations, and collect the feature properties of each point, such as DoG response, scale, edge point density, etc. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, a Rank-SVM model is trained, yielding a weight vector. In use, based on the feature properties of each point and the weight vector obtained from training, we compute a ranking score corresponding to the stability value and sort the feature points accordingly. In conclusion, we compared our algorithm with the original SIFT detector under different viewpoint changes, blurs and illuminations, and it comes as no surprise that the experimental results show that our algorithm is more efficient.

  13. Enhanced robustness of myoelectric pattern recognition to across-day variation through invariant feature extraction.

    Science.gov (United States)

    Liu, Jianwei; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang

    2015-01-01

    Robust pattern recognition is critical for myoelectric prostheses (MP) developed in the laboratory to be used in real life. This study focuses on the robustness of MP control during usage across many days. Due to the variability inherent in extended electromyography (EMG) signals, the distribution of EMG features extracted from several days' data may have large intra-class scatter. However, as the subjects perform the same motion type on different days, we hypothesize that there exist some invariant characteristics in the EMG features. Therefore, given a set of training data from several days, it is possible to find an invariant component in them. To this end, an invariant feature extraction (IFE) framework based on kernel Fisher discriminant analysis is proposed. A transformation is found which minimizes the intra-class (within a motion type) scatter while maximizing the inter-class (between different motion types) scatter. Five intact-limbed subjects and three transradial-amputee subjects participated in an experiment lasting ten days. The results show that the generalization ability of a classifier trained on previous days to unseen testing days can be improved by IFE. IFE significantly outperforms the baseline (original input features) in classification accuracy, both for intact-limbed and amputee subjects (on average 88.97% vs. 91.20% and 85.09% vs. 88.22%, p < 0.05).

  14. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition is the first step for many applications in various fields, such as identification, and is used as a key to enter various electronic devices, video surveillance, human-computer interfaces and image database management. This paper focuses on feature extraction from an image using a Gabor filter; the extracted image feature vector is then given as input to a neural network, which is trained with the input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose and cheeks. The main requirement of this technique is the threshold, which provides the desired sensitivity; the threshold values are the feature vectors taken from the faces. These feature vectors are given to a feed-forward neural network to train it. Using the feed-forward neural network as a classifier, recognized and unrecognized faces are classified. This classifier attains a high face detection rate, and by training with more input vectors the system proves to be effective. The effectiveness of the proposed method is demonstrated by the experimental results.
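
    A sketch of a small Gabor feature bank with scikit-image (the frequencies and orientations are illustrative, not the paper's settings):

    import numpy as np
    from skimage.filters import gabor

    def gabor_feature_vector(face, frequencies=(0.1, 0.2, 0.3),
                             thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
        """Mean magnitude of Gabor responses over a bank of frequencies
        and orientations: a compact input vector for the neural network."""
        feats = []
        for f in frequencies:
            for t in thetas:
                real, imag = gabor(face, frequency=f, theta=t)
                feats.append(np.hypot(real, imag).mean())
        return np.asarray(feats)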

  15. Effect of Feature Extraction on Automatic Sleep Stage Classification by Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Prucnal Monika

    2017-06-01

    EEG signal-based sleep stage classification facilitates an initial diagnosis of sleep disorders. The aim of this study was to compare the efficiency of three feature extraction methods: power spectral density (PSD), discrete wavelet transform (DWT), and empirical mode decomposition (EMD), in the automatic classification of sleep stages by an artificial neural network (ANN). 13,650 30-second EEG epochs from the PhysioNet database, representing five sleep stages (W, N1-N3, and REM), were transformed into feature vectors using the aforementioned methods together with principal component analysis (PCA). Three feed-forward ANNs with the same optimal structure (12 input neurons, 23 + 22 neurons in two hidden layers, and 5 output neurons) were trained using three sets of features, each obtained with one of the compared methods. Calculating the PSD of EEG epochs in frequency sub-bands corresponding to the brain waves proved to be the most effective feature extraction method for the analysed problem (81.1% accuracy on the testing set, compared with 74.2% for DWT and 57.6% for EMD).
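
    The winning PSD method amounts to integrating a Welch periodogram over the classical brain-wave sub-bands. A minimal sketch for one 30-second epoch follows; the band edges and sampling rate are common conventions, not necessarily the paper's exact values.

```python
import numpy as np
from scipy.signal import welch

# illustrative sub-band edges in Hz, following common EEG conventions
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "sigma": (12, 16), "beta": (16, 30)}

def psd_band_features(epoch, fs=100):
    """PSD features for one 30-s EEG epoch: integrated power in each
    brain-wave sub-band, suitable as ANN inputs (after PCA, per the study)."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs * 4)      # 4-s Welch segments
    feats = []
    for lo, hi in BANDS.values():
        mask = (f >= lo) & (f < hi)
        feats.append(np.trapz(pxx[mask], f[mask]))    # integrate PSD over the band
    return np.log(np.array(feats) + 1e-12)            # log-power is customary
```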

  16. A Novel Approach Based on Data Redundancy for Feature Extraction of EEG Signals.

    Science.gov (United States)

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Kamel, Nidal; Hussain, Muhammad

    2016-03-01

    Feature extraction and classification of electroencephalogram (EEG) signals in medical applications is a challenging task. EEG signals produce a huge amount of redundant or repeating information, and this redundancy causes potential hurdles in EEG analysis. Hence, we propose to use this redundant information of the EEG as a feature to discriminate and classify different EEG datasets. In this study, we propose a JPEG2000-based approach for computing data redundancy from multi-channel EEG signals and use the redundancy as a feature for classification of EEG signals with support vector machine, multi-layer perceptron, and k-nearest neighbors classifiers. The approach is validated on three EEG datasets and achieves a high accuracy rate (95-99%) in classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 contains epileptic seizure and non-seizure EEG. The findings demonstrate that the approach is able to extract robust features and classify EEG signals in various applications, covering clinical as well as normal EEG patterns.
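
    The underlying idea is that redundant signals compress well, so the achieved compression becomes a scalar feature. The sketch below uses zlib as a stand-in codec (the paper uses JPEG2000); the quantization depth and the exact redundancy definition are illustrative assumptions.

```python
import zlib
import numpy as np

def redundancy_feature(eeg, levels=256):
    """Estimate the redundancy of a multi-channel EEG segment from how
    well it compresses. eeg: (channels, samples) float array."""
    lo, hi = eeg.min(), eeg.max()
    # quantize to 8-bit codes so the byte stream reflects the waveform
    q = np.clip((eeg - lo) / (hi - lo + 1e-12) * (levels - 1), 0, levels - 1)
    raw = q.astype(np.uint8).tobytes()
    compressed = zlib.compress(raw, 9)
    return 1.0 - len(compressed) / len(raw)   # higher => more redundant signal
```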

  17. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    Science.gov (United States)

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates the data into linearly independent components (ICs), classifying these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) to classify feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which would then, for example, have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable, and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.
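
    Range filtering, the best-performing feature extractor here, is simply a sliding-window maximum minus minimum. A minimal sketch applied to an IC rendered as a 2-D image; the window size and the summary statistics passed to LDA are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def range_filter_feature(ic_image, size=3):
    """Range filtering of an IC's 2-D representation (e.g. a scalp map
    rendered as an image): local max minus local min per pixel, reduced
    to a short feature vector for an LDA classifier."""
    rng = maximum_filter(ic_image, size=size) - minimum_filter(ic_image, size=size)
    return np.array([rng.mean(), rng.std(), rng.max()])
```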

  18. Feature extraction from 3D lidar point clouds using image processing methods

    Science.gov (United States)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available to many users, who are therefore unable to experiment with the LiDAR point cloud data directly to extract desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features such as buildings, vegetated areas, parking lots, and roads from LiDAR data using standard image processing tools, as such tools are relatively mature and offer many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster; raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated both in a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
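
    The first stage, gridding the 3D points into a raster, can be sketched as below; keeping the highest return per cell and the 1 m cell size are illustrative choices, and the interpolation of empty cells is omitted.

```python
import numpy as np

def points_to_raster(xyz, cell=1.0, fill=np.nan):
    """Transfer 3-D LiDAR points to a raster grid, keeping the highest
    return per cell. xyz: (n_points, 3) array of x, y, z coordinates."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)        # north-up raster rows
    grid = np.full((row.max() + 1, col.max() + 1), -np.inf)
    np.maximum.at(grid, (row, col), z)              # max height per cell
    grid[np.isinf(grid)] = fill                     # mark cells with no returns
    return grid
```

    The same gridding applied to the return intensities (or to first-minus-last return heights) yields the other channels that are conflated into the multi-channel image.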

  19. Feature Extraction in the North Sinai Desert Using Spaceborne Synthetic Aperture Radar: Potential Archaeological Applications

    Directory of Open Access Journals (Sweden)

    Christopher Stewart

    2016-10-01

    Techniques were implemented to extract anthropogenic features in the desert region of North Sinai using data from the first- and second-generation Phased Array type L-band Synthetic Aperture Radar (PALSAR-1 and -2). To obtain a synoptic view over the study area, a mosaic of average, multitemporal (De Grandi filtered) PALSAR-1 σ° backscatter of North Sinai was produced. Two subset regions were selected for further analysis. The first included an area of abundant linear features of high relative backscatter in a strategic but sparsely developed area between the Wadi Tumilat and Gebel Maghara. The second included an area of low-backscatter anomaly features in a coastal sabkha around the archaeological sites of Tell el-Farama, Tell el-Mahzan, and Tell el-Kanais. Over the subset region between the Wadi Tumilat and Gebel Maghara, algorithms were developed to extract linear features and convert them to vector format to facilitate interpretation. The algorithms were based on mathematical morphology, but several techniques were applied to distinguish apparent man-made features from sand dune ridges. The first technique took as input the average σ° backscatter and used a Digital Elevation Model (DEM)-derived Local Incidence Angle (LIA) mask to exclude sand dune ridges. The second technique, which proved more effective, used the average interferometric coherence as input. Extracted features were compared with other available information layers and in some cases revealed partially buried roads. Over the coastal subset region, a time series of PALSAR-2 spotlight data was processed. The coefficient of variation (CoV) of the De Grandi filtered imagery clearly revealed anomaly features of low CoV. These were compared with the results of an archaeological field-walking survey carried out previously. The features generally correspond with isolated areas identified in the field survey as having a higher density of archaeological finds, and interpreted as possible
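
    The coefficient of variation used over the coastal subset is a per-pixel temporal statistic over the co-registered image stack. A minimal sketch, assuming the stack has already been filtered and co-registered:

```python
import numpy as np

def temporal_cov(stack):
    """Per-pixel coefficient of variation over a time series of SAR
    backscatter images, stack shape (time, rows, cols). Temporally
    stable surfaces, like the sabkha anomalies, show low CoV."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return std / (mean + 1e-12)       # epsilon avoids division by zero
```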

  20. Extracting features buried within high density atom probe point cloud data through simplicial homology.

    Science.gov (United States)

    Srinivasan, Srikant; Kaluskar, Kaustubh; Broderick, Scott; Rajan, Krishna

    2015-12-01

    Feature extraction from Atom Probe Tomography (APT) data is usually performed by repeatedly delineating iso-concentration surfaces of a chemical component of the sample material at different values of the concentration threshold, until the user visually determines a satisfactory result in line with prior knowledge. However, this approach allows important features buried within the sample to be visually obscured by the high density and volume (~10^7 atoms) of APT data. This work provides a data-driven methodology to objectively determine the appropriate concentration threshold for classifying different phases, such as precipitates, by mapping the topology of the APT data set using a concept from algebraic topology termed persistent simplicial homology. A case study of Sc precipitates in an Al-Mg-Sc alloy is presented, demonstrating the power of this technique to capture features, such as the precise demarcation of Sc clusters and Al segregation at the cluster boundaries, not easily obtained by routine visual adjustment.
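
    The role persistent homology plays here can be illustrated in its simplest, 0-dimensional form: sweep the iso-concentration threshold and track how connected clusters appear and merge. The sketch below counts connected components per threshold on a gridded concentration field; a real analysis would use a persistence library such as GUDHI or Ripser, and the gridded input is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import label

def component_persistence(conc, thresholds):
    """Count connected components of the super-level sets of a gridded
    concentration field at each threshold -- a 0-dimensional stand-in
    for the persistent-homology sweep. conc: 3-D concentration grid."""
    counts = []
    for t in thresholds:
        _, n = label(conc >= t)        # clusters above the iso-level t
        counts.append(n)
    return np.array(counts)            # long plateaus indicate stable thresholds
```

    Thresholds where the component count is stable over a wide range correspond to persistent features, which is the objective criterion the paper uses in place of visual adjustment.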