WorldWideScience

Sample records for extract texture features

  1. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during the feature extraction process to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both the training and test images before extraction. Wiener and median filters are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier and conduct a comparative analysis on a benchmark dataset with different filters and image sizes. Our experiments demonstrate that combining the feature extraction process with filtering gives promising results on noisy images.
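
    A minimal sketch of this kind of pipeline, assuming scikit-image, SciPy and scikit-learn are available; the filter sizes, the LBP parameters and the helper name extract_lbp_histogram are illustrative choices rather than the authors' settings.

      import numpy as np
      from scipy.signal import wiener
      from scipy.ndimage import median_filter
      from skimage.feature import local_binary_pattern
      from sklearn.naive_bayes import GaussianNB

      def extract_lbp_histogram(patch, points=8, radius=1):
          """Denoise a grayscale texture patch, then return a uniform-LBP histogram."""
          denoised = median_filter(wiener(patch.astype(float)), size=3)
          lbp = local_binary_pattern(denoised, points, radius, method="uniform")
          n_bins = points + 2  # uniform codes 0..P plus the "non-uniform" bin
          hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
          return hist

      def train_classifier(patches, labels):
          """Fit a Naive Bayes classifier on LBP histograms of labelled patches."""
          features = np.array([extract_lbp_histogram(p) for p in patches])
          return GaussianNB().fit(features, labels)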

  2. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.

  3. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    OpenAIRE

    Miroslav Benco; Robert Hudec; Patrik Kamencay; Martina Zachariasova; Slavomir Matuska

    2014-01-01

    This paper discusses research in the area of texture image classification. More specifically, the combination of texture and colour features is researched. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in experiments. For the texture classification, the support vector machine is...

  4. Novel Method for Color Textures Features Extraction Based on GLCM

    Directory of Open Access Journals (Sweden)

    R. Hudec

    2007-12-01

    Full Text Available Texture is one of the most popular features for image classification and retrieval. Because grayscale textures provide enough information to solve many tasks, colour information was often not utilized. In recent years, however, many researchers have begun to take colour information into consideration. In the texture analysis field, many algorithms have been enhanced to process colour textures and new ones have been developed. In this paper a new method for colour GLCM textures is presented and compared with other well-known methods.

  5. Texture features analysis for coastline extraction in remotely sensed images

    Science.gov (United States)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    The accurate knowledge of the shoreline position is of fundamental importance in several applications such as cartography and ship positioning. Moreover, the coastline can be seen as a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough is that coastline detection is treated as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which all data (those representing the sea) are not significant. The processing for the digital elevation model can then be refined by considering only the in-land data.

  6. A Method of SAR Target Recognition Based on Gabor Filter and Local Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Wang Lu

    2015-12-01

    Full Text Available This paper presents a novel texture feature extraction method based on a Gabor filter and Three-Patch Local Binary Patterns (TPLBP) for Synthetic Aperture Radar (SAR) target recognition. First, SAR images are processed by a Gabor filter in different directions to enhance the significant features of the targets and their shadows. Then, effective local texture features are extracted from the Gabor-filtered images by TPLBP. This not only overcomes the shortcoming of Local Binary Patterns (LBP), which cannot describe texture features for large-scale neighborhoods, but also maintains the rotation-invariant characteristic, which alleviates the impact of direction variations of SAR targets on recognition performance. Finally, an Extreme Learning Machine (ELM) classifier is applied to the extracted texture features. Experimental results on the MSTAR database demonstrate the effectiveness of the proposed method.
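
    TPLBP is not available in common imaging libraries, so the sketch below substitutes the standard uniform LBP after a small Gabor filter bank; the frequencies, orientations and the function name gabor_lbp_features are illustrative assumptions (scikit-image assumed), not the paper's configuration.

      import numpy as np
      from skimage.filters import gabor
      from skimage.feature import local_binary_pattern

      def gabor_lbp_features(image, frequencies=(0.1, 0.2), n_orientations=4):
          """Filter an image chip with a small Gabor bank, then describe each response with LBP."""
          feats = []
          for freq in frequencies:
              for k in range(n_orientations):
                  theta = k * np.pi / n_orientations
                  real, _ = gabor(image, frequency=freq, theta=theta)  # real part of the response
                  lbp = local_binary_pattern(real, 8, 1, method="uniform")
                  hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                  feats.append(hist)
          return np.concatenate(feats)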

  7. Texture Feature Extraction Method Combining Nonsubsampled Contour Transformation with Gray Level Co-occurrence Matrix

    Directory of Open Access Journals (Sweden)

    Xiaolan He

    2013-12-01

    Full Text Available The gray level co-occurrence matrix (GLCM) is an important method for extracting image texture features from synthetic aperture radar (SAR). However, the GLCM can only extract textures at a single scale and in a single direction. A texture feature extraction method combining the nonsubsampled contourlet transform (NSCT) and the GLCM is proposed, so as to extract texture features at multiple scales and in multiple directions. We first conducted multi-scale and multi-direction decomposition of the SAR images with the NSCT, then extracted co-occurrence statistics with the GLCM from the obtained sub-band images, carried out correlation analysis on the extracted statistics to remove redundant feature quantities, and combined them with the gray-level features to constitute the multi-feature vector. Finally, we made full use of the advantages of the support vector machine for small sample sets and its generalization ability, and completed the partition of the multi-feature vector space by SVM so as to achieve SAR image segmentation. The experimental results showed that the segmentation accuracy can be improved and good edge retention can be obtained by using the GLCM texture extraction method based on the NSCT domain and multi-feature fusion for SAR image segmentation.

  8. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    Directory of Open Access Journals (Sweden)

    Miroslav Benco

    2014-07-01

    Full Text Available This paper discusses research in the area of texture image classification. More specifically, the combination of texture and colour features is researched. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in experiments. For the texture classification, the support vector machine is used. In the first approach, the methods are applied in separate channels of the colour image. The experimental results show a large increase in precision for colour texture retrieval by GLCM. Therefore, the GLCM is modified to extract probability matrices directly from the colour image. A method for a 13-direction neighbourhood system is proposed and formulas for the computation of the probability matrices are presented. The proposed method is called CLCM (colour-level co-occurrence matrices) and experimental results show that it is a powerful method for colour texture classification.
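
    The first approach described above (grey-level methods applied per colour channel) can be sketched as follows, assuming a recent scikit-image where the functions are named graycomatrix/graycoprops; the quantization level and the chosen Haralick properties are illustrative, and the full 13-direction CLCM is not reproduced here.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def per_channel_glcm_features(rgb_image, levels=32):
          """GLCM statistics computed separately in the R, G and B channels and concatenated."""
          feats = []
          for c in range(3):
              chan = rgb_image[..., c].astype(np.uint16)
              quant = (chan * levels // 256).astype(np.uint8)  # assumes 8-bit input
              glcm = graycomatrix(quant, [1], [0, np.pi / 2],
                                  levels=levels, symmetric=True, normed=True)
              for prop in ("contrast", "correlation", "energy", "homogeneity"):
                  feats.append(graycoprops(glcm, prop).mean())
          return np.array(feats)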

  9. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

    Full Text Available The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called “Histogram Backprojection”. The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
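
    A rough sketch of the two low-level descriptors named in this record, a normalized chromaticity histogram and a variance-based coarse/fine label; the bin count, the variance threshold and the function names are illustrative assumptions, not the study's values.

      import numpy as np

      def chromaticity_histogram(rgb_patch, bins=32):
          """2-D histogram of normalized r-g chromaticities of an RGB patch."""
          rgb = rgb_patch.reshape(-1, 3).astype(float)
          total = rgb.sum(axis=1)
          valid = total > 0
          r = rgb[valid, 0] / total[valid]
          g = rgb[valid, 1] / total[valid]
          hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]], density=True)
          return hist

      def coarseness_label(gray_patch, threshold=200.0):
          """Mark a region as coarse or fine from its gray-level variance."""
          return "coarse" if np.var(gray_patch.astype(float)) > threshold else "fine"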

  10. Improving Identification of Area Targets by Integrated Analysis of Hyperspectral Data and Extracted Texture Features

    Science.gov (United States)

    2012-09-01

    [Record snippet; full abstract not available.] Acronyms from the report include B (Blue), CA (California), FWHM (Full Width Half Max), G (Green), GIS (Geographic Information System) and GLCM (Gray Level Co-occurrence Matrix). From the GLCM, the quantities known as texture features are extracted; the textures studied in the landmark paper include angular second moment. A window defines the number of surrounding pixels used to create the GLCM; a 3x3 window would only include the 8 pixels immediately adjacent to the center pixel.

  11. Application of Texture Characteristics for Urban Feature Extraction from Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    D.Shanmukha Rao

    2014-12-01

    Full Text Available The quest for foolproof methods of extracting various urban features from high-resolution satellite imagery with minimal human intervention has resulted in the development of texture-based algorithms. Given that the textural properties of images provide valuable information for discrimination purposes, it is appropriate to employ texture-based algorithms for feature extraction. The Gray Level Co-occurrence Matrix (GLCM) method is a highly efficient technique for extracting second-order statistical texture features. Various urban features can be distinguished based on a set of features, viz. energy, entropy, homogeneity etc., that characterize different aspects of the underlying texture. As a preliminary step, a notable number of regions of interest of the urban feature and contrast locations are identified visually. After calculating the Gray Level Co-occurrence Matrices of these selected regions, the aforementioned texture features are computed. These features can be used to form a high-dimensional feature vector to carry out content-based retrieval. Insignificant features are eliminated to reduce the dimensionality of the feature vector by performing Principal Components Analysis (PCA). The selection of the discriminating features is also aided by the Jeffreys-Matusita (JM) distance, which serves as a measure of class separability. Feature identification is then carried out by computing the chosen feature vectors for every pixel of the entire image and comparing them with their corresponding mean values. This helps in identifying and classifying the pixels corresponding to the urban feature being extracted. To reduce commission errors, various index values, viz. the Soil Adjusted Vegetation Index (SAVI), the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI), are assessed for each pixel. The extracted output is then median filtered to isolate the feature of interest after removing the salt-and-pepper noise.
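
    The spectral indices used above to reduce commission errors have standard definitions, sketched below with NumPy; the small epsilon guards against division by zero, and the soil brightness factor L = 0.5 is the usual default rather than necessarily the value used in the paper.

      import numpy as np

      def ndvi(nir, red, eps=1e-12):
          """Normalized Difference Vegetation Index."""
          return (nir - red) / (nir + red + eps)

      def ndwi(green, nir, eps=1e-12):
          """Normalized Difference Water Index (McFeeters form)."""
          return (green - nir) / (green + nir + eps)

      def savi(nir, red, L=0.5, eps=1e-12):
          """Soil Adjusted Vegetation Index with soil brightness factor L."""
          return (1.0 + L) * (nir - red) / (nir + red + L + eps)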

  12. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    Science.gov (United States)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image forming a specific texture can be distinguished manually by eye; however, this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture. Experts can distinguish the quality of wood based on the texture observed in certain parts of the wood. In this study, texture features have been extracted from wood images so that the characteristics of wood can be identified digitally by computer. Feature extraction is carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood image. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Universite de Bourgogne, France, from wood samples grouped by quality by experts and divided into four quality types. We obtain statistics that illustrate the distribution of the texture feature values of each wood type, compared according to the edge operator used and the selected GLCM parameters.
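
    A sketch of the edge-then-GLCM idea with a Sobel operator, assuming a recent scikit-image (graycomatrix/graycoprops names); the 16-level quantization, offsets and property list are illustrative choices rather than the study's GLCM parameters.

      import numpy as np
      from skimage import feature, filters

      def edge_glcm_features(gray_image, levels=16):
          """Apply an edge operator, quantize the response, and compute GLCM statistics."""
          edges = filters.sobel(gray_image.astype(float))
          quant = np.uint8(np.clip(edges / (edges.max() + 1e-12) * (levels - 1), 0, levels - 1))
          glcm = feature.graycomatrix(quant, distances=[1],
                                      angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                      levels=levels, symmetric=True, normed=True)
          props = ("contrast", "correlation", "energy", "homogeneity")
          return np.array([feature.graycoprops(glcm, p).mean() for p in props])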

  13. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture (or repetitive) patterns, and extracts texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the feature vector containing the extracted texture features, because a high-dimensional feature vector can degrade classification performance; this yields an effective feature vector of discriminative fault features for diagnosis. Finally, the proposed approach uses one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, a Gaussian radial basis function kernel is used with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
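
    Two steps of this pipeline, the signal-to-image conversion and the PCA plus one-against-all SVM stage, can be sketched as below with NumPy and scikit-learn; the image width, the number of principal components and the omission of the DNS-map step are simplifying assumptions for illustration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      def signal_to_gray_image(signal, width=64):
          """Reshape a 1-D vibration signal into a 2-D 8-bit gray image (row-major)."""
          n = (len(signal) // width) * width
          img = np.asarray(signal[:n], dtype=float).reshape(-1, width)
          img = (img - img.min()) / (img.max() - img.min() + 1e-12)
          return np.uint8(img * 255)

      # PCA for dimensionality reduction, then one-against-all SVMs with an RBF kernel.
      model = make_pipeline(PCA(n_components=20), OneVsRestClassifier(SVC(kernel="rbf")))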

  14. Spectral and bispectral feature-extraction neural networks for texture classification

    Science.gov (United States)

    Kameyama, Keisuke; Kosugi, Yukio

    1997-10-01

    A neural network model (Kernel Modifying Neural Network: KM Net) specialized for image texture classification, which unifies the filtering kernels for feature extraction and the layered network classifier, will be introduced. The KM Net consists of a layer of convolution kernels that are constrained to be 2D Gabor filters to guarantee efficient spectral feature localization. The KM Net enables an automated feature extraction in multi-channel texture classification through simultaneous modification of the Gabor kernel parameters (central frequency and bandwidth) and the connection weights of the subsequent classifier layers by a backpropagation-based training rule. The capability of the model and its training rule was verified via segmentation of common texture mosaic images. In comparison with the conventional multi-channel filtering method which uses numerous filters to cover the spatial frequency domain, the proposed strategy can greatly reduce the computational cost both in feature extraction and classification. Since the adaptive Gabor filtering scheme is also applicable to band selection in moment spectra of higher orders, the network model was extended for adaptive bispectral filtering for extraction of the phase relation among the frequency components. The ability of this Bispectral KM Net was demonstrated in the discrimination of visually discriminable synthetic textures with identical local power spectral distributions.

  15. Hardwood species classification with DWT based hybrid texture feature extraction techniques

    Indian Academy of Sciences (India)

    Arvind R Yadav; R S Anand; M L Dewal; Sangeeta Gupta

    2015-12-01

    In this work, discrete wavelet transform (DWT) based hybrid texture feature extraction techniques have been used to categorize microscopic images of hardwood species into 75 different classes. Initially, the DWT is employed to decompose the image up to 7 levels using the Daubechies (db3) wavelet as the decomposition filter. Further, first-order statistics (FOS) and four variants of local binary pattern (LBP) descriptors are used to acquire distinct features of these images at the various levels. Linear support vector machine (SVM), radial basis function (RBF) kernel SVM and random forest classifiers have been employed for classification. The classification accuracies obtained with state-of-the-art and DWT based hybrid texture features using the various classifiers are compared. The DWT based FOS-uniform local binary pattern (DWTFOSLBPu2) texture features at the 4th level of image decomposition produced the best classification accuracies of 97.67 ± 0.79% and 98.40 ± 0.64% for grayscale and RGB images, respectively, using the linear SVM classifier. Reduction of the feature dataset by the minimal redundancy maximal relevance (mRMR) feature selection method is achieved, and the best classification accuracies of 99.00 ± 0.79% and 99.20 ± 0.42% have been obtained for the DWT based FOS-LBP histogram Fourier features (DWTFOSLBP-HF) technique at the 5th and 6th levels of image decomposition for grayscale and RGB images, respectively, using the linear SVM classifier. The DWTFOSLBP-HF features selected with the mRMR method also establish superiority among the DWT based hybrid texture feature extraction techniques for databases randomly divided into different proportions of training and test datasets.
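
    A much-simplified sketch of a DWT-plus-LBP hybrid descriptor using PyWavelets and scikit-image; taking first-order statistics and an LBP histogram only from the approximation and horizontal detail bands is an illustrative shortcut, not the DWTFOSLBPu2 recipe described above.

      import numpy as np
      import pywt
      from skimage.feature import local_binary_pattern

      def dwt_fos_lbp_features(gray_image, wavelet="db3", level=4):
          """First-order statistics plus an LBP histogram per selected DWT sub-band."""
          coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
          bands = [coeffs[0]] + [np.abs(detail[0]) for detail in coeffs[1:]]
          feats = []
          for band in bands:
              feats.extend([band.mean(), band.std(), band.min(), band.max()])  # FOS
              lbp = local_binary_pattern(band, 8, 1, method="uniform")
              hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
              feats.extend(hist)
          return np.array(feats)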

  16. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws’ masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, as well as one self-recorded database (KHUSC-EmoDB), to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification performance for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions as visual expressions, beyond what the pitch and formant tracks convey. In addition, de-noising of 2-D images can be completed more easily than de-noising of 1-D speech.
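
    A sketch of turning a speech signal into a spectrogram image and summarizing it with Laws' texture-energy masks, using SciPy; the log compression stands in for the cubic contrast curve described above, and the mask set, averaging window and sampling rate are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from scipy.signal import convolve2d, spectrogram

      def laws_texture_energy(speech, fs=16000):
          """Spectrogram image -> Laws' mask responses -> mean texture energy per mask."""
          _, _, spec = spectrogram(speech, fs=fs, nperseg=256, noverlap=128)
          img = np.log1p(spec)  # contrast compression (the paper uses a cubic curve)
          L5 = np.array([1, 4, 6, 4, 1], float)
          E5 = np.array([-1, -2, 0, 2, 1], float)
          S5 = np.array([-1, 0, 2, 0, -1], float)
          R5 = np.array([1, -4, 6, -4, 1], float)
          feats = []
          for a in (L5, E5, S5, R5):
              for b in (L5, E5, S5, R5):
                  response = convolve2d(img, np.outer(a, b), mode="same")
                  energy = uniform_filter(np.abs(response), size=15)  # local texture energy
                  feats.append(energy.mean())
          return np.array(feats)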

  17. Extraction of enclosure culture area from SPOT-5 image based on texture feature

    Science.gov (United States)

    Tang, Wei; Zhao, Shuhe; Ma, Ronghua; Wang, Chunhong; Zhang, Shouxuan; Li, Xinliang

    2007-06-01

    The east Taihu Lake region is characterized by high-density and large areas of enclosure culture, which tend to cause eutrophication of the lake and worsen the quality of its water. This paper takes a 380×380 area of the east Taihu Lake image as an example and discusses an extraction method combining the texture features of a high-resolution image with spectral information. First, we choose the best band combination of 1, 3, 4 according to the principles of maximal entropy combination and the OIF index. After applying band arithmetic and a principal component analysis (PCA) transformation, we achieve dimensionality reduction and data compression. Subsequently, the textures of the first principal component image are analyzed using Gray Level Co-occurrence Matrices (GLCM), obtaining the statistics contrast, entropy and mean. The mean statistic is fixed as the optimal index and appropriate conditional thresholds for extraction are determined. Finally, decision trees are established to extract the enclosure culture area. Combining the spectral information with the spatial texture feature, we obtain a satisfactory extraction result and provide a technical reference for a wide-spread survey of the enclosure culture area.

  18. Detection of Brain Tumor and Extraction of Texture Features using Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Prof. Dilip Kumar Gandhi

    2012-10-01

    Full Text Available A brain cancer detection system is designed. The aim of this paper is to locate the tumor and determine the texture features from a brain-cancer-affected MRI. A computer-based diagnosis is performed in order to detect the tumors from a given magnetic resonance image. Basic image processing techniques are used to locate the tumor region; these consist of image enhancement, image binarization, and image morphological operations. Texture features are computed using the Gray Level Co-occurrence Matrix and consist of five distinct features. Selective features, or combinations of selective features, will be used in the future to determine the class of the query image. Only astrocytoma-type brain cancer images are used, for simplicity.

  19. Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients.

    Science.gov (United States)

    Chaddad, Ahmad; Tanougast, Camel

    2016-11-01

    GBM is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three glioblastoma multiforme (GBM) phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of the phenotypes are related to patient survival. MR imaging data from 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer for rigid registration of T1-weighted imaging and corresponding fluid attenuation inversion recovery images. The GBM phenotypes were segmented using 3D Slicer tools. Texture features were extracted from the GLCM of the GBM phenotypes. Thereafter, the Kruskal-Wallis test was employed to select the significant features. Robust predictive GBM features were identified and underwent numerous classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The simulation results showed that 22 texture features were statistically significant, supporting GLCM analyses in both the diagnosis and prognosis of this patient population.
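
    The Kruskal-Wallis selection step mentioned above can be sketched as follows with SciPy; the feature-matrix layout (rows = regions, columns = GLCM features) and the 0.05 threshold are assumptions for illustration, not the study's settings.

      import numpy as np
      from scipy.stats import kruskal

      def significant_feature_columns(features, phenotype_labels, alpha=0.05):
          """Keep feature columns whose distributions differ across phenotypes."""
          phenotype_labels = np.asarray(phenotype_labels)
          keep = []
          for j in range(features.shape[1]):
              groups = [features[phenotype_labels == g, j] for g in np.unique(phenotype_labels)]
              _, p_value = kruskal(*groups)
              if p_value < alpha:
                  keep.append(j)
          return keep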

  20. WAVELET BASED CONTENT BASED IMAGE RETRIEVAL USING COLOR AND TEXTURE FEATURE EXTRACTION BY GRAY LEVEL COOCURENCE MATRIX AND COLOR COOCURENCE MATRIX

    Directory of Open Access Journals (Sweden)

    Jeyanthi Prabhu

    2014-01-01

    Full Text Available In this study we propose an effective content-based image retrieval method using color and texture features based on wavelet coefficients, in order to achieve good retrieval efficiency. Color feature extraction is done by a color histogram. Texture feature extraction is performed by the Gray Level Co-occurrence Matrix (GLCM) or the Color Co-occurrence Matrix (CCM). This study provides better results for image retrieval by integrating features. Feature extraction by color histogram, texture by GLCM and texture by CCM are compared in terms of the precision performance measure.
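
    A minimal sketch of an integrated colour-plus-texture retrieval descriptor of this general kind, assuming a recent scikit-image; the hue histogram, the GLCM settings and the plain Euclidean comparison are illustrative stand-ins rather than the paper's exact wavelet-based formulation.

      import numpy as np
      from skimage.color import rgb2gray, rgb2hsv
      from skimage.feature import graycomatrix, graycoprops

      def cbir_feature_vector(rgb_image, levels=32, color_bins=16):
          """Concatenate a colour histogram with GLCM texture statistics."""
          hue = rgb2hsv(rgb_image)[..., 0]
          color_hist, _ = np.histogram(hue, bins=color_bins, range=(0, 1), density=True)
          gray = np.uint8(rgb2gray(rgb_image) * (levels - 1))
          glcm = graycomatrix(gray, [1], [0, np.pi / 2], levels=levels,
                              symmetric=True, normed=True)
          texture = [graycoprops(glcm, p).mean()
                     for p in ("contrast", "energy", "homogeneity", "correlation")]
          return np.concatenate([color_hist, texture])

      def dissimilarity(query_vec, database_vec):
          """Euclidean distance used as the dissimilarity measure between two images."""
          return np.linalg.norm(query_vec - database_vec)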

  1. Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography

    Science.gov (United States)

    Ray, Shonket; Choi, Jae Y.; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Mammographic texture features have been shown to have value in breast cancer risk assessment. Previous models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. This algorithm has two important aspects: (1) the adaptive nature of automatically determining an optimal number of regions of interest (ROIs) in the image and each ROI's corresponding size based on the parenchymal tissue distribution over the whole breast region, and (2) characterizing both the local and global mammographic appearance of the parenchymal tissue, which could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance of AUC = 0.92. The clinical implications of using prediction models incorporating these texture features may include the future development of better tools and guidelines regarding personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.

  2. A Method of Soil Salinization Information Extraction with SVM Classification Based on ICA and Texture Features

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fei; TASHPOLAT Tiyip; KUNG Hsiang-te; DING Jian-li; MAMAT.Sawut; VERNER Johnson; HAN Gui-hong; GUI Dong-wei

    2011-01-01

    Salt-affected soil classification using remotely sensed images is one of the most common applications in remote sensing, and many algorithms have been developed and applied for this purpose in the literature. This study takes the Delta Oasis of the Weigan and Kuqa Rivers as the study area and discusses the prediction of soil salinization from ETM+ Landsat data. It reports a Support Vector Machine (SVM) classification method based on Independent Component Analysis (ICA) and texture features. The paper introduces the fundamental theory of the SVM algorithm and ICA, and then incorporates ICA and texture features. The classification result is compared qualitatively and quantitatively with ICA-SVM classification, single-data-source SVM classification, maximum likelihood classification (MLC) and neural network classification. The results show that this method can effectively solve the problems of low accuracy and fragmented classification results in single-data-source classification, and it scales well to higher-dimensional inputs. The overall accuracy is 98.64%, an increase of 10.2% over maximum likelihood classification and of 12.94% over neural network classification, thus achieving good effectiveness. Therefore, the classification method based on SVM and incorporating ICA and texture features can be adopted for RS image classification and monitoring of soil salinization.

  3. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    Science.gov (United States)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread is well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these

  4. Computer-aided diagnosis of interstitial lung disease: a texture feature extraction and classification approach

    Science.gov (United States)

    Vargas-Voracek, Rene; McAdams, H. Page; Floyd, Carey E., Jr.

    1998-06-01

    An approach for the classification of normal or abnormal lung parenchyma from selected regions of interest (ROIs) of chest radiographs is presented for computer-aided diagnosis of interstitial lung disease (ILD). The proposed approach uses a feed-forward neural network to classify each ROI based on a set of isotropic texture measures obtained from the joint grey level distribution of pairs of pixels separated by a specific distance. Two hundred ROIs, each 64 x 64 pixels in size (11 x 11 mm), were extracted from digitized chest radiographs for testing. Diagnosis performance was evaluated with the leave-one-out method. Classification of independent ROIs achieved a sensitivity of 90% and a specificity of 84% with an area under the receiver operating characteristic curve of 0.85. The diagnosis for each patient was correct for all cases when a 'majority vote' criterion for the classification of the corresponding ROIs was applied to issue a normal or ILD patient classification. The proposed approach is a simple, fast, and consistent method for computer-aided diagnosis of ILD with very good performance. Further research will include additional cases, including differential diagnosis among ILD manifestations.

  5. Texture based feature extraction methods for content based medical image retrieval systems.

    Science.gov (United States)

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The presented study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated in this study are based on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, accepted as spatial methods. In the experiments, the database comprises hundreds of medical images of the brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, it is observed that the Gabor wavelet has been the most effective and accurate method.

  6. Effect of Aqueous Extract of the Seaweed Gracilaria domingensis on the Physicochemical, Microbiological, and Textural Features of Fermented Milks.

    Science.gov (United States)

    Tavares Estevam, Adriana Carneiro; Alonso Buriti, Flávia Carolina; de Oliveira, Tiago Almeida; Pereira, Elainy Virginia Dos Santos; Florentino, Eliane Rolim; Porto, Ana Lúcia Figueiredo

    2016-04-01

    The effects of a Gracilaria domingensis seaweed aqueous extract, in comparison with gelatin, on the physicochemical, microbial, and textural characteristics of fermented milks processed with the mixed culture SAB 440 A, composed of Streptococcus thermophilus, Lactobacillus acidophilus, and Bifidobacterium animalis ssp. lactis, were investigated. The addition of the G. domingensis aqueous extract did not affect the pH, titratable acidity, or microbial viability of the fermented milks when compared with the control (with no texture modifier) and the products with added gelatin. Fermented milk with the seaweed aqueous extract added showed firmness, consistency, cohesiveness, and viscosity index at least 10% higher than those observed for the control product (P < 0.05). At 4 h of fermentation, the fermented milks with only the G. domingensis extract showed a texture comparable to that observed for products containing only gelatin. At 5 h of fermentation, firmness and consistency increased significantly (P < 0.05) in products with only the seaweed extract added, a behavior not observed in products with the full amount of gelatin, probably due to differences in the interactions of these ingredients with casein during the development of the gel network throughout the acidification of the milk. The G. domingensis aqueous extract appears to be a promising gelatin alternative for use as a texture modifier in fermented milks and related dairy products.

  7. Texture Feature Extraction Method Fused with LBP and GLCM

    Institute of Scientific and Technical Information of China (English)

    王国德; 张培林; 任国全; 寇玺

    2012-01-01

    In order to extract effective features for texture description and classification, this paper proposes a texture feature extraction method that fuses the Local Binary Pattern (LBP) and the Gray-Level Co-occurrence Matrix (GLCM). The texture image is processed by a rotation-invariant LBP operator, the LBP image is obtained, and its GLCMs are calculated. Contrast, correlation, energy and inverse difference moment are used to describe the texture. Experimental results show that, compared with other methods, the features extracted by the proposed method have stronger texture discrimination ability, and the average classification accuracy reaches 93%.
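
    A sketch of the fusion described above, computing GLCM statistics on a rotation-invariant LBP image, assuming a recent scikit-image; the neighbourhood size and offsets are illustrative, and homogeneity is used here as the inverse-difference-moment statistic.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

      def lbp_glcm_features(gray_image):
          """Rotation-invariant LBP image, then contrast/correlation/energy/IDM from its GLCM."""
          lbp = local_binary_pattern(gray_image, 8, 1, method="ror")  # codes stay in 0..255
          glcm = graycomatrix(lbp.astype(np.uint8), [1],
                              [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "correlation", "energy", "homogeneity")
          return np.array([graycoprops(glcm, p).mean() for p in props])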

  8. Genetic Feature Selection for Texture Classification

    Institute of Scientific and Technical Information of China (English)

    PAN Li; ZHENG Hong; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    This paper presents a novel approach to feature subset selection using genetic algorithms. This approach has the ability to accommodate multiple criteria such as the accuracy and cost of classification into the process of feature selection and finds the effective feature subset for texture classification. On the basis of the effective feature subset selected, a method is described to extract the objects which are higher than their surroundings, such as trees or forest, in the color aerial images. The methodology presented in this paper is illustrated by its application to the problem of trees extraction from aerial images.

  9. Evaluation of textural features for multispectral images

    Science.gov (United States)

    Bayram, Ulya; Can, Gulcan; Duzgun, Sebnem; Yalabik, Nese

    2011-11-01

    Remote sensing is a widely used field of great importance, so the performance of the selected features plays a great role. In order to gain some perspective on useful textural features, we have brought together state-of-the-art textural features from the recent literature that have yet to be applied in the remote sensing field, and we present a comparison with traditional ones. We selected the textural features most commonly used in remote sensing, namely grey-level co-occurrence matrix (GLCM) and Gabor features. The other selected features are local binary patterns (LBP), edge orientation features extracted after applying a steerable filter, and histogram of oriented gradients (HOG) features. A color histogram feature is also used and compared. Since most of these features are histogram-based, we have compared the performance of bin-by-bin comparison with a histogram comparison method named the diffusion distance method. To obtain the performance of each feature, the k-nearest neighbor (k-NN) classification method is applied.

  10. Wood recognition using image texture features.

    Directory of Open Access Journals (Sweden)

    Hang-jun Wang

    Full Text Available Inspired by theories of higher-order local autocorrelation (HLAC), this paper presents a simple, novel, yet very powerful approach for wood recognition. The method is suitable for wood database applications, which are of great importance in wood-related industries and administrations. At the feature extraction stage, a set of features is extracted from the Mask Matching Image (MMI). The MMI features preserve the mask matching information gathered by the HLAC methods. The texture information in the image can then be accurately extracted from the statistical and geometrical features. In particular, richer information and enhanced discriminative power are achieved through the length histogram, a new histogram that embodies the width and height histograms. The performance of the proposed approach is compared to state-of-the-art HLAC approaches on the wood stereogram dataset ZAFU WS 24. By conducting extensive experiments on ZAFU WS 24, we show that our approach significantly improves the classification accuracy.

  11. Feature-aware natural texture synthesis

    KAUST Repository

    Wu, Fuzhang

    2014-12-04

    This article presents a framework for natural texture synthesis and processing. This framework is motivated by the observation that given examples captured in natural scene, texture synthesis addresses a critical problem, namely, that synthesis quality can be affected adversely if the texture elements in an example display spatially varied patterns, such as perspective distortion, the composition of different sub-textures, and variations in global color pattern as a result of complex illumination. This issue is common in natural textures and is a fundamental challenge for previously developed methods. Thus, we address it from a feature point of view and propose a feature-aware approach to synthesize natural textures. The synthesis process is guided by a feature map that represents the visual characteristics of the input texture. Moreover, we present a novel adaptive initialization algorithm that can effectively avoid the repeat and verbatim copying artifacts. Our approach improves texture synthesis in many images that cannot be handled effectively with traditional technologies.

  12. Unsupervised Multimodal Magnetic Resonance Images Segmentation and Multiple Sclerosis Lesions Extraction based on Edge and Texture Features

    Directory of Open Access Journals (Sweden)

    Tannaz AKBARPOUR

    2017-06-01

    Full Text Available Segmentation of Multiple Sclerosis (MS) lesions is a crucial part of MS diagnosis and therapy. Segmentation of lesions is usually performed manually, exposing this process to human errors; thus, exploiting automatic and semi-automatic methods is of interest. In this paper, a new method is proposed to segment MS lesions from multichannel MRI data (T1-W and T2-W). For this purpose, statistical features of the spatial domain and wavelet coefficients of the frequency domain are extracted for each pixel of skull-stripped images to form a feature vector. An unsupervised clustering algorithm is applied to group pixels and extract lesions. Experimental results demonstrate that the proposed method is better than other state-of-the-art and contemporary segmentation methods in terms of the Dice metric, specificity, false positive rate, and Jaccard metric.

  13. Research and Application of Texture Feature Extraction Based on Multi-features

    Institute of Scientific and Technical Information of China (English)

    梅浪奇; 郭建明; 刘清

    2015-01-01

    Texture is an important visual feature of images and is commonly used to identify and distinguish them; extracting texture features is the first problem its applications must solve. After summarizing and analyzing the texture feature extraction methods in common use, and based on the characteristics of the Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP) and discrete wavelet transform (DWT) algorithms, a multi-feature texture extraction algorithm is proposed that fuses the features extracted by each of these algorithms, with weights used to configure the contribution of each. An image retrieval experiment was designed to compare the texture description ability of the features extracted by the different algorithms. The results show that, on the Corel image database, the average retrieval precision of the proposed multi-feature method is 20% higher than that of the GLCM algorithm, 9% higher than the LBP algorithm, 10% higher than the DWT algorithm, and 15% higher than the feature fusion method of Xu Shaoping et al., confirming that the proposed algorithm combines the advantages of the individual algorithms and has good rotation and scale invariance. Its drawback is that the GLCM, LBP and DWT features must all be extracted, so the computation time is the sum of that of the three algorithms, which limits the practicality of the method.

  14. Analysis of Contourlet Texture Feature Extraction to Classify the Benign and Malignant Tumors from Breast Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Prabhakar Telagarapu

    2014-03-01

    Full Text Available The number of breast cancer cases has been increasing over the past three decades. Early detection of breast cancer is crucial for effective treatment. Mammography is used for early detection and screening, but mammography procedures may not be very comfortable, especially for young women, and they involve ionizing radiation. Ultrasound is a broadly popular medical imaging modality because of its non-invasive, real-time, convenient and low-cost nature. However, the quality of ultrasound images is corrupted by speckle noise, whose presence severely degrades the signal-to-noise ratio (SNR) and contrast resolution of the image. Therefore, speckle noise needs to be reduced before extracting the features. This research focuses on developing an algorithm for speckle noise reduction, feature extraction and classification of benign and malignant tumors; the results showed that SVM-polynomial classification produces a high classification rate (77%) for Gray Level Co-occurrence Matrix (GLCM) based contourlet features computed from wavelet soft-thresholding denoised breast ultrasound images.

  15. Accurate Image Retrieval Algorithm Based on Color and Texture Feature

    Directory of Open Access Journals (Sweden)

    Chunlai Yan

    2013-06-01

    Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of the visual content (features) of an image, CBIR aims to find images that contain specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of the image as well as the similarity measures, are investigated. On the basis of this theoretical research, an image retrieval system based on color and texture features is designed. In this system, the weighted color feature based on HSV space is adopted as the color feature vector; four features of the co-occurrence matrix, namely energy, entropy, inertia quadrature and correlation, are used to construct the texture vector; and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.

  16. Optical devices featuring textured semiconductor layers

    Science.gov (United States)

    Moustakas, Theodore D.; Cabalu, Jasper S.

    2011-10-11

    A semiconductor sensor, solar cell or emitter, or a precursor therefor, has a substrate and one or more textured semiconductor layers deposited onto the substrate. The textured layers enhance light extraction or absorption. Texturing in the region of multiple quantum wells greatly enhances internal quantum efficiency if the semiconductor is polar and the quantum wells are grown along the polar direction. Electroluminescence of LEDs of the invention is dichromatic, and results in variable color LEDs, including white LEDs, without the use of phosphor.

  17. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  18. Texture feature based liver lesion classification

    Science.gov (United States)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features to a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP) features, where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combining Gabor, LBP and intensity features improved the results to a final accuracy of 97%.

  19. GLCM textural features for Brain Tumor Classification

    Directory of Open Access Journals (Sweden)

    N S Zulpe

    2012-05-01

    Full Text Available Automatic recognition of medical images is a challenging task in the field of medical image processing. Medical images are acquired from different modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and are used for diagnosis. In the medical field, brain tumor classification is a very important phase for further treatment. Human interpretation of a large number of MRI slices (normal or abnormal) may lead to misclassification; hence there is a need for an automated recognition system that can classify the type of brain tumor. In this research work, we used four different classes of brain tumors, extracted the GLCM-based textural features of each class, and applied them to a two-layered feed-forward neural network, which gives a 97.5% classification rate.

  20. Improved texture feature extraction algorithm based on GLCM

    Institute of Scientific and Technical Information of China (English)

    龚家强; 李晓宁

    2011-01-01

    After a study of the GLCM and its improved variants, and in particular of their excessive computational burden, a novel method based on the grey level co-occurrence hybrid structure (GLCHS) and the discrete Fourier transform is presented to achieve texture feature extraction. First, the spectrum image obtained after the Fourier transform is divided into several blocks, which reduces the number of grey levels used in the computation. Then, grey-level normalization is applied to reduce the range of the grey values. Finally, the GLCHS is used to compute a five-dimensional feature vector that describes the image texture. Experimental results indicate that the improved method reduces the computational complexity and greatly reduces the time required for texture feature extraction.

  1. Scene classification of infrared images based on texture feature

    Science.gov (United States)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    Scene classification refers to assigning a physical scene to one of a set of predefined categories. Texture features provide a useful approach to classifying scenes. Texture can be considered to be repeating patterns of local variation of pixel intensities, and texture analysis is important in many applications of computer image analysis for the classification or segmentation of images based on local spatial variations of intensity. Texture describes the structural information of images, so it provides data complementary to the spectrum for classification. Infrared thermal imagers are now used in many different fields. Since infrared images of objects reflect their own thermal radiation, infrared images have some shortcomings: poor contrast between the objects and the background, blurred edges, much noise, and so on. Because of these shortcomings, it is difficult to extract texture features from infrared images. In this paper we have developed an infrared image texture feature-based algorithm to classify scenes of infrared images, and we investigate texture extraction using the Gabor wavelet transform. The Gabor transform has an excellent capability for analysing local frequency and orientation, and Gabor wavelets are chosen for their biological relevance and technical properties. In the first place, after introducing the Gabor wavelet transform and the texture analysis methods, texture features of the infrared images are extracted by the Gabor wavelet transform, utilizing the multi-scale property of the Gabor filter. In the second place, we take the multi-dimensional means and standard deviations at different scales and directions as texture parameters. The last stage is the classification of the scene texture parameters with the least squares support vector machine (LS-SVM) algorithm. SVM is based on the principle of structural risk minimization (SRM). Compared with SVM, LS-SVM has overcome the shortcoming of

  2. Performance Analysis of Texture Image Classification Using Wavelet Feature

    Directory of Open Access Journals (Sweden)

    Dolly Choudhary

    2013-01-01

    Full Text Available This paper compares the performance of various classifiers for multi-class image classification, where the features are extracted by the proposed algorithm using Haar wavelet coefficients. The wavelet features are extracted from the original texture images and the corresponding complementary images. Because it is very difficult to decide which classifier would show better performance for multi-class image classification, this work is an analytical study of the performance of various classifiers for a single multi-class classification problem. In this work, fifteen textures are taken for classification using a Feed Forward Neural Network, a Naïve Bayes Classifier, a K-nearest neighbor classifier and a Cascaded Neural Network.
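
    A compact sketch of Haar wavelet texture features and a k-NN classifier of the kind compared above, using PyWavelets and scikit-learn; the sub-band energy descriptor, the decomposition level and k are illustrative choices rather than the paper's settings.

      import numpy as np
      import pywt
      from sklearn.neighbors import KNeighborsClassifier

      def haar_wavelet_features(gray_image, level=2):
          """Energy of each Haar sub-band as a compact texture descriptor."""
          coeffs = pywt.wavedec2(gray_image.astype(float), "haar", level=level)
          bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
          return np.array([np.mean(np.square(band)) for band in bands])

      def train_knn(train_images, train_labels, k=3):
          """Fit a k-NN classifier on the wavelet features of labelled texture images."""
          features = np.array([haar_wavelet_features(img) for img in train_images])
          return KNeighborsClassifier(n_neighbors=k).fit(features, train_labels)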

  3. Image Mining Using Texture and Shape Feature

    Directory of Open Access Journals (Sweden)

    Prof.Rupali Sawant

    2010-12-01

    Full Text Available Discovering knowledge from data stored in typical alphanumeric databases, such as relational databases, has been the focal point of most of the work in database mining. However, with advances in secondary and tertiary storage capacity, coupled with relatively low storage cost, more and more non-standard data (in the form of images) is being accumulated. This vast collection of image data can also be mined to discover new and valuable knowledge. During the process of image mining, the concepts in different hierarchies and their relationships are extracted from different hierarchies and granularities, and association rule mining and concept clustering are consequently implemented. The generalization and specialization of concepts are realized in different hierarchies: lower-layer concepts can be upgraded to upper-layer concepts, and upper-layer concepts guide the extraction of lower-layer concepts. It is a process from image data to image information, from image information to image knowledge, and from lower-layer concepts to an upper-layer concept lattice, and an approach based on the concept lattice and cloud model theory is proposed. The methods of image mining from image texture and shape features are introduced here, and include the following basic steps: first pre-process the images, second use the cloud model to extract concepts, and lastly use the concept lattice to extract a series of image knowledge.

  4. Selective Extraction of Entangled Textures via Adaptive PDE Transform

    Directory of Open Access Journals (Sweden)

    Yang Wang

    2012-01-01

    Full Text Available Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE transform is an efficient method for functional mode decomposition. The present work introduces adaptive PDE transform algorithm to appropriately threshold the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human face, clothes, background, natural landscape, text, forest, camouflaged sniper and neuron skeletons have validated the proposed method.

  5. Graph-based features for texture discrimination

    NARCIS (Netherlands)

    Grigorescu, Cosmin; Petkov, Nikolay; Sanfeliu, A; Villanueva, JJ; Vanrell, M; Alquezar, R; Huang, T; Serra, J

    2000-01-01

    Graph-based features, such as the number of connected components, edges of a given orientation and vertices per unit area, and the number of vertices and pixels per connected component, are proposed for the analysis of textures which consist of structural elements. The proposed set of features is

  6. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    Science.gov (United States)

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
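
    As an illustration of the kind of co-occurrence descriptors such studies extract, the sketch below computes a small GLCM feature panel with scikit-image; function names follow recent scikit-image releases, and the region of interest, distances, and angles are placeholder choices.

    ```python
    # Sketch under assumptions: a small GLCM descriptor panel with scikit-image.
    # `roi` is a placeholder for a uint8 grayscale region of interest.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_panel(roi, distances=(1, 3), angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        glcm = graycomatrix(roi, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ('contrast', 'correlation', 'energy', 'homogeneity')
        # Average each property over all distance/angle combinations.
        return {p: float(graycoprops(glcm, p).mean()) for p in props}
    ```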

  7. Combining multiple features for color texture classification

    Science.gov (United States)

    Cusano, Claudio; Napoletano, Paolo; Schettini, Raimondo

    2016-11-01

    The analysis of color and texture has a long history in image analysis and computer vision. These two properties are often considered as independent, even though they are strongly related in images of natural objects and materials. Correlation between color and texture information is especially relevant in the case of variable illumination, a condition that has a crucial impact on the effectiveness of most visual descriptors. We propose an ensemble of hand-crafted image descriptors designed to capture different aspects of color textures. We show that the use of these descriptors in a multiple classifiers framework makes it possible to achieve a very high classification accuracy in classifying texture images acquired under different lighting conditions. A powerful alternative to hand-crafted descriptors is represented by features obtained with deep learning methods. We also show how, with the proposed combining strategy, hand-crafted and convolutional neural network features can be used together to further improve the classification accuracy. Experimental results on a food database (raw food texture) demonstrate the effectiveness of the proposed strategy.

  8. Parallel implementation of Gray Level Co-occurrence Matrices and Haralick texture features on cell architecture

    NARCIS (Netherlands)

    Shahbahrami, A.; Pham, T.A.; Bertels, K.L.M.

    2011-01-01

    Texture features extraction algorithms are key functions in various image processing applications such as medical images, remote sensing, and content-based image retrieval. The most common way to extract texture features is the use of Gray Level Co-occurrence Matrices (GLCMs). The GLCM contains the

  9. Classification of interstitial lung disease patterns with topological texture features

    CERN Document Server

    Huber, Markus B; Leinsinger, Gerda; Ray, Lawrence A; Wismüller, Axel; 10.1117/12.844318

    2010-01-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honey-combing, a stack of 70 axial, lung kernel reconstructed images were acquired from HRCT chest exams. A set of 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characteriza...

  10. Multiwavelets domain singular value features for image texture classification

    Institute of Scientific and Technical Information of China (English)

    RAMAKRISHNAN S.; SELVAN S.

    2007-01-01

    A new approach based on multiwavelets transformation and singular value decomposition (SVD) is proposed for the classification of image textures. Lower singular values are truncated based on their energy distribution to classify the textures in the presence of additive white Gaussian noise (AWGN). The proposed approach extracts features such as energy, entropy, local homogeneity and max-min ratio from the selected singular values of the multiwavelets transformation coefficients of image textures. The classification was carried out using a probabilistic neural network (PNN). Performance of the proposed approach was compared with conventional wavelet domain gray level co-occurrence matrix (GLCM) based features, a discrete multiwavelets transformation energy based approach, and an HMM based approach. Experimental results showed the superiority of the proposed algorithms when compared with existing algorithms.

  11. Image retrieval using both color and texture features

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In order to improve the retrieval performance of images, this paper proposes an efficient approach for extracting and retrieving color images. The block diagram of our proposed approach to content-based image retrieval (CBIR) is given first, and then we introduce three image feature extraction algorithms: the color histogram, the edge histogram and the edge direction histogram. The histogram Euclidean distance, cosine distance and histogram intersection are used to measure the image-level similarity. On the basis of using color and texture features separately, a new method for image retrieval using combined features is proposed. Tested on an image database of 766 general-purpose images, with comparison and analysis of the performance of the features and similarity measures, our proposed retrieval approach demonstrates promising performance. Experiments show that the combined features are superior to each of the three individual features in retrieval.
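
    For reference, the three histogram similarity measures named above can be written in a few lines; the sketch below assumes L1-normalized histograms of equal length and is not tied to the paper's exact indexing scheme.

    ```python
    # Minimal sketch of the three similarity measures named in the abstract,
    # assuming L1-normalized histograms h1 and h2 of equal length.
    import numpy as np

    def euclidean_distance(h1, h2):
        return float(np.sqrt(np.sum((h1 - h2) ** 2)))

    def cosine_distance(h1, h2):
        return 1.0 - float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

    def histogram_intersection(h1, h2):
        # Higher means more similar (1.0 for identical normalized histograms).
        return float(np.sum(np.minimum(h1, h2)))
    ```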

  12. Classification of interstitial lung disease patterns with topological texture features

    Science.gov (United States)

    Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel

    2010-03-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honey-combing, a stack of 70 axial, lung kernel reconstructed images were acquired from HRCT chest exams. A set of 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction. The best classification results were obtained by the MF features, which performed significantly better than all the standard GLCM and MD features (p interstitial lung diseases when compared to standard texture analysis methods.

  13. Ship Targets Discrimination Algorithm in SAR Images Based on Hu Moment Feature and Texture Feature

    Directory of Open Access Journals (Sweden)

    Liu Lei

    2016-01-01

    Full Text Available To discriminate ship targets in SAR images, this paper proposes a method based on the combination of Hu moment features and texture features. Firstly, 7 Hu moment features are extracted; the gray level co-occurrence matrix is then used to extract the features of mean, variance, uniformity, energy, entropy, inertia moment, correlation and difference. Finally, a k-nearest-neighbour classifier is used to analyse the 15-dimensional feature vectors. The experimental results show that the method of this paper achieves a good effect.
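
    A hedged sketch of the first stage (the seven Hu moment invariants) using OpenCV is given below; the log-scaling step is a common convention rather than something stated in the abstract, and segmentation of the SAR chip into a target region is omitted.

    ```python
    # Hedged sketch of the Hu-moment stage only; cv2 is assumed available and
    # `chip` is a placeholder single-channel SAR image chip (uint8 or float32).
    import cv2
    import numpy as np

    def hu_moment_features(chip):
        hu = cv2.HuMoments(cv2.moments(chip)).flatten()        # 7 invariants
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)     # compress dynamic range
    ```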

  14. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MA Huimin; WANG Yan

    2003-01-01

    Very high speed image processing is needed in some applications, especially for weapons. In this paper, a high speed image feature extraction system with a parallel structure was implemented with a Complex programmable logic device (CPLD), and it can realize image feature extraction in several microseconds, almost with no delay. The system design is presented through an application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image. Accordingly, feature extraction is performed on these two kinds of features. Edge and area are the two most important features of the image. An angle often exists at the connection of different parts of the target's image, which indicates that one area ends and another begins. These three key features can form the whole representation of an image. So this parallel feature extraction system includes three processing modules: edge extraction, angle extraction and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one route of processor, every route has the same circuit form, and the routes work together at the same time, controlled by a common clock, to realize feature extraction. The extraction system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.

  15. Cirrhosis Classification Based on Texture Classification of Random Features

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2014-01-01

    Full Text Available Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them to make a specific treatment with an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not meet the clinical needs of cirrhosis and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. So, extracting texture features is the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective way, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle and advanced stage). CCTCRF does not need strong assumptions except the sparse character of the image, contains sufficient texture information, includes a concise and effective process, and makes case decisions with high accuracy. Experimental results also illustrate the satisfying performance, and they are also compared with a typical NN with GLCM.

  16. Cirrhosis classification based on texture classification of random features.

    Science.gov (United States)

    Liu, Hui; Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them to make a specific treatment with an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not meet the clinical needs of cirrhosis and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. So, extracting texture features is the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective way, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle and advanced stage). CCTCRF does not need strong assumptions except the sparse character of the image, contains sufficient texture information, includes a concise and effective process, and makes case decisions with high accuracy. Experimental results also illustrate the satisfying performance, and they are also compared with a typical NN with GLCM.

  17. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    Full Text Available the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...

  18. Ballistic missile precession frequency extraction by spectrogram's texture

    Science.gov (United States)

    Wu, Longlong; Xu, Shiyou; Li, Gang; Chen, Zengping

    2013-10-01

    In order to extract the precession frequency, a crucial parameter in ballistic target recognition that reflects the kinematical characteristics as well as structural and mass distribution features, we developed a dynamic RCS signal model for a conical ballistic missile warhead with log-normal multiplicative noise, substituting for the familiar additive noise, derived formulas for the micro-Doppler induced by precession motion, analyzed the time-varying micro-Doppler features using time-frequency transforms, extracted the precession frequency by measuring the spectrogram's texture, and verified the approach through computer simulation studies. Simulation demonstrates the excellent performance of the proposed method in extracting the precession frequency, especially in the case of low SNR.

  19. Fingerprint Feature Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Mehala. G

    2014-03-01

    Full Text Available The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract the fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also to extract true minutiae.

  20. Tongue Image Feature Extraction in TCM

    Institute of Scientific and Technical Information of China (English)

    LI Dong; DU Lian-xiang; LU Fu-ping; DU Jun-ping

    2004-01-01

    In this paper, digital image processing and computer vision techniques are applied to study tongue images for feature extraction with VC++ and Matlab. Extraction and analysis of the tongue surface features are based on shape, color, edge, and texture. The developed software has various functions, a good user interface, and is easy to use. Feature data for tongue image pattern recognition is provided, which forms a sound basis for future tongue image recognition.

  1. Fingerprint Feature Extraction Algorithm

    OpenAIRE

    Mehala. G

    2014-01-01

    The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract the fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also extractin...

  2. Change detection in high resolution SAR images based on multiscale texture features

    Science.gov (United States)

    Wen, Caihuan; Gao, Ziqiang

    2011-12-01

    This paper studies a change detection algorithm for high resolution (HR) Synthetic Aperture Radar (SAR) images based on multi-scale texture features. Firstly, preprocessed multi-temporal TerraSAR images were decomposed by the 2-D dual tree complex wavelet transform (DT-CWT), and multi-scale texture features were extracted from those images. Then, the log-ratio operation was utilized to get difference images, and the Bayes minimum error theory was used to extract change information from the difference images. Lastly, a precision assessment was done. Meanwhile, we compared with the result of a method based on texture features extracted from the gray-level co-occurrence matrix (GLCM). We conclude that the change detection algorithm based on multi-scale texture features offers a considerable improvement, and proves to be an effective method for change detection in high spatial resolution SAR images.
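
    The log-ratio operator mentioned above is simple to express; the following sketch is a generic version for two co-registered SAR amplitude images and does not reproduce the authors' DT-CWT feature stage or Bayes thresholding.

    ```python
    # Generic log-ratio difference image for two co-registered SAR amplitude
    # images (img_t1, img_t2); eps guards against division by zero.
    import numpy as np

    def log_ratio(img_t1, img_t2, eps=1e-6):
        """Values near zero indicate unchanged pixels; large values indicate change."""
        return np.abs(np.log((np.asarray(img_t2, float) + eps) /
                             (np.asarray(img_t1, float) + eps)))
    ```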

  3. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    Science.gov (United States)

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

    Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions at detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern, (2) structural texture feature family: Voronoi Tessellation, (3) signal processing based texture feature family: Gaussian, Steerable, and Gabor filters, and (4) model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor are tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
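
    The best-performing configuration reported above (a 256-bin gray-scale histogram with an SVM under 10-fold cross validation) could be prototyped roughly as follows with scikit-learn; image loading, facial-region cropping, and parameter tuning are omitted and the variable names are placeholders.

    ```python
    # Assumed prototype of the reported best configuration: 256-bin gray-scale
    # histogram features with an RBF-kernel SVM under 10-fold cross validation.
    # `images` and `labels` are placeholders for the facial-region samples.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def gray_histogram(image_u8, bins=256):
        hist, _ = np.histogram(image_u8, bins=bins, range=(0, 256))
        return hist / max(hist.sum(), 1)                  # normalize to unit mass

    # X = np.vstack([gray_histogram(img) for img in images])
    # scores = cross_val_score(SVC(kernel='rbf'), X, np.asarray(labels), cv=10)
    ```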

  4. Relevance of echo-structure and texture features

    DEFF Research Database (Denmark)

    Karemore, Gopal; Mullick, Jhinuk Basu; KV, Dr. Rajagopal;

    2010-01-01

    Aim: Echostructure is an essential parameter for the evaluation of circumscribed lesions and can be described as a texture feature on ultrasound images. Present study evaluates the possibility of distinguishing between benign and malignant breast tumors using various texture features. Materials a...

  5. Driver Fatigue Features Extraction

    Directory of Open Access Journals (Sweden)

    Gengtian Niu

    2014-01-01

    Full Text Available Driver fatigue is the main cause of traffic accidents. How to extract effective features of fatigue is important for recognition accuracy and traffic safety. To solve the problem, this paper proposes a new method of driver fatigue feature extraction based on facial image sequences. In this method, first, each facial image in the sequence is divided into nonoverlapping blocks of the same size, and Gabor wavelets are employed to extract multiscale and multiorientation features. Then the mean value and standard deviation of each block's features are calculated, respectively. Considering that the facial appearance of human fatigue is a dynamic process that develops over time, each block's features are analyzed across the sequence. Finally, the Adaboost algorithm is applied to select the most discriminating fatigue features. The proposed method was tested on a self-built database which includes a wide range of human subjects of different genders, poses, and illuminations in real-life fatigue conditions. Experimental results show the effectiveness of the proposed method.
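
    A rough sketch of the per-block Gabor statistics described above is shown below, using scikit-image's gabor filter; the block size, frequencies, and orientation count are illustrative rather than the paper's settings, and the Adaboost selection stage is not reproduced.

    ```python
    # Rough sketch: mean and standard deviation of the real Gabor response per
    # block, scale and orientation. Parameter values are illustrative only.
    import numpy as np
    from skimage.filters import gabor

    def block_gabor_stats(image, block=16, freqs=(0.1, 0.25), n_orient=4):
        image = np.asarray(image, dtype=float)
        feats = []
        for y in range(0, image.shape[0] - block + 1, block):
            for x in range(0, image.shape[1] - block + 1, block):
                patch = image[y:y + block, x:x + block]
                for f in freqs:
                    for k in range(n_orient):
                        real, _ = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
                        feats += [real.mean(), real.std()]
        return np.array(feats)
    ```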

  6. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to the high-level face recognition and expression analysis. This paper presents a novel method for the real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against the illumination changes, scale variation, head rotations, and hand interference.

  7. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-04-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, making it very difficult to quantitatively study textures. One of the key tools used for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of textures, that may vary under small nuisances like illumination variation, deformations, and noise. Local invariant descriptors are robust to these nuisances, making them beneficial for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group, leading to significant errors in segmentation. This work addresses this issue by proposing dense local descriptors, called Shape-Tailored Features, which are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, is not known a priori, we propose a joint problem for Shape-Tailored Features and the regions. We present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations over the non-joint, non-Shape-Tailored approach, and the method outperforms the existing state-of-the-art.

  8. Identification of hazelnut fields using spectral and Gabor textural features

    Science.gov (United States)

    Reis, Selçuk; Taşdemir, Kadim

    2011-09-01

    Land cover identification and monitoring agricultural resources using remote sensing imagery are of great significance for agricultural management and subsidies. Particularly, permanent crops are important in terms of economy (mainly rural development) and environmental protection. Permanent crops (including nut orchards) are extracted with very high resolution remote sensing imagery using visual interpretation or automated systems based on mainly textural features which reflect the regular plantation pattern of their orchards, since the spectral values of the nut orchards are usually close to the spectral values of other woody vegetation due to various reasons such as spectral mixing, slope, and shade. However, when the nut orchards are planted irregularly and densely at fields with high slope, textural delineation of these orchards from other woody vegetation becomes less relevant, posing a challenge for accurate automatic detection of these orchards. This study aims to overcome this challenge using a classification system based on multi-scale textural features together with spectral values. For this purpose, Black Sea region of Turkey, the region with the biggest hazelnut production in the world and the region which suffers most from this issue, is selected and two Quickbird archive images (June 2005 and September 2008) of the region are acquired. To differentiate hazel orchards from other woodlands, in addition to the pansharpened multispectral (4-band) bands of 2005 and 2008 imagery, multi-scale Gabor features are calculated from the panchromatic band of 2008 imagery at four scales and six orientations. One supervised classification method (maximum likelihood classifier, MLC) and one unsupervised method (self-organizing map, SOM) are used for classification based on spectral values, Gabor features and their combination. Both MLC and SOM achieve the highest performance (overall classification accuracies of 95% and 92%, and Kappa values of 0.93 and 0

  9. Classification of High Resolution C-Band PolSAR Data on Polarimetric and Texture Features

    Science.gov (United States)

    Zhao, Lei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Li, Lan

    2014-11-01

    PolSAR image classification is an important technique in the remote sensing area. For high resolution PolSAR images, polarimetric and texture features are equally important for classification. The texture features are mainly extracted through the Gray Level Co-occurrence Matrix (GLCM) method, but this method has some deficiencies. First, the GLCM method can only work on gray-scale images; secondly, the number of texture features extracted by the GLCM method generally runs to dozens, or even hundreds. Too many features may carry large redundancy and will increase the complexity of classification. Therefore, this paper introduces a new texture feature factor, RK, derived from a non-Gaussian statistical model of the PolSAR image. Using domestic airborne C-band PolSAR image data, we completed a classification combining the polarimetric and texture characteristics. The results showed that this new texture feature factor RK can overcome the above drawbacks and can achieve the same performance compared with the GLCM method.

  10. Ethnicity distinctiveness through iris texture features using Gabor filters

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, Gugulethu P

    2017-02-01

    Full Text Available and ethnicity. Researchers have reported that iris texture features contain information that is inclined to human genetics and is highly discriminative between different eyes of different ethnicities. This work applies image processing and machine learning...

  11. Unsupervised Skin cancer detection by combination of texture and shape features in dermoscopy images

    Directory of Open Access Journals (Sweden)

    Hamed aghapanah rudsari

    2014-05-01

    Full Text Available In this paper a novel unsupervised feature extraction method for the detection of melanoma in skin images is presented. First of all, normal skin surrounding the lesion is removed in a segmentation process. In the next step, some shape and texture features are extracted from the output image of the first step: GLCM, GLRLM, the proposed directional-frequency features, and some parameters of the Ripplet transform are used as texture features; also, NRL features and Zernike moments are used as shape features. In total, 63 texture features and 31 shape features are extracted. Finally, the number of extracted features is reduced using the PCA method and a proposed method based on the Fisher criterion. The extracted features are classified using Perceptron Neural Networks, a Support Vector Machine, 4-NN, and Naïve Bayes. The results show that the SVM has the best performance. The proposed algorithm is applied on a database that consists of 160 labeled images. The overall results confirm the superiority of the proposed method in both accuracy and reliability over previous works.
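
    The PCA-based reduction step mentioned above can be sketched briefly with scikit-learn; the retained-variance threshold is an assumed placeholder, and the alternative Fisher-criterion method from the abstract is not shown.

    ```python
    # Hedged sketch of the PCA reduction step; X would be the (n_lesions, 94)
    # matrix of concatenated texture (63) and shape (31) features, and the 95%
    # retained-variance threshold is an assumption.
    from sklearn.decomposition import PCA

    def reduce_features(X, variance=0.95):
        """Project feature vectors onto the components explaining `variance` of the energy."""
        pca = PCA(n_components=variance)
        return pca.fit_transform(X), pca
    ```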

  12. Comparison of texture features based on Gabor filters

    NARCIS (Netherlands)

    Grigorescu, Simona E.; Petkov, Nicolai; Kruizinga, Peter

    2002-01-01

    Texture features that are based on the local power spectrum obtained by a bank of Gabor filters are compared. The features differ in the type of nonlinear post-processing which is applied to the local power spectrum. The following features are considered: Gabor energy, complex moments, and grating c

  13. AUTOMATIC SHIP DETECTION IN SINGLE-POL SAR IMAGES USING TEXTURE FEATURES IN ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    E. Khesali

    2015-12-01

    Full Text Available This paper presents a novel method for detecting ships from high-resolution synthetic aperture radar (SAR) images. This method categorizes ship targets from single-pol SAR images using texture features in artificial neural networks. As such, the method tries to overcome the lack of an operational solution that is able to reliably detect ships with one SAR channel. The method has the following three main stages: 1) feature extraction; 2) feature selection; and 3) ship detection. The first part extracts different texture features from the SAR image. These textures include occurrence and co-occurrence measures with different window sizes. Then, the best features are selected. Finally, an artificial neural network is used to separate ship pixels from sea pixels. In the post-processing stage some morphological filters are used to improve the result. The effectiveness of the proposed method is verified using Sentinel-1 data in VV polarization. Experimental results indicate that the proposed algorithm provides time-saving, high-precision ship extraction, feature analysis, and detection. The results also show that, using texture features, the algorithm properly discriminates speckle noise from ships.

  14. Classifying Cyst and Tumor Lesion Using Support Vector Machine Based on Dental Panoramic Images Texture Features

    OpenAIRE

    Nurtanio, Ingrid

    2013-01-01

    Dental radiographs are essential in diagnosing the pathology of the jaw. However, the similar radiographic appearance of jaw lesions causes difficulties in differentiating cysts from tumors. Therefore, we developed a computer-aided classification system for cyst and tumor lesions in dental panoramic images. The proposed system consists of feature extraction based on texture using first-order statistics texture (FO), the Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run ...

  15. Modeling forest aboveground biomass by combining spectrum, textures and topographic features

    Institute of Scientific and Technical Information of China (English)

    Mingshi LI; Ying TAN; Jie PAN; Shikui PENG

    2008-01-01

    Many textural measures have been developed and used for improving land cover classification accuracy, but they have rarely been examined for their role in improving the performance of forest aboveground biomass estimation. The relationship between texture and biomass is poorly understood. In this paper, SPOT5 HRG datasets were ortho-rectified and atmospherically calibrated. Then the transform of spectral features is introduced, and the extraction of textural measures based on the Gray Level Co-occurrence Matrix is also implemented for four different directions (0°, 45°, 90° and 135°) and various moving window sizes, ranging from 3 x 3 to 51 x 51. Thus, a variety of textures were generated. Combined with derived topographic features, the forest aboveground biomass estimation models for five predominant forest types in the scenic spot of the Mausoleum of Sun Yat-Sen, Nanjing, are identified and constructed, and the estimation accuracies exhibited by these models are also validated and evaluated respectively. The results indicate that: 1) most textures are weakly correlated with forest biomass, but a minority of textural measures such as ME, CR and VA play a significantly effective and critical role in estimating forest biomass; 2) the textures of coniferous forest appear preferable to those of broad-leaved forest and mixed forest in representing the spatial configurations of forests; and 3) among the topographic features including slope, aspect and elevation, aspect has the lowest correlation with the biomass of a forest in this study.

  16. Oil spill information extraction based on textural features and multispectral image

    Institute of Scientific and Technical Information of China (English)

    王晶; 刘湘南

    2013-01-01

    To address the problems that oil film extraction relying on spectral features alone yields low accuracy and that oil film extraction from radar imagery is easily affected by sea conditions and false targets, a method for extracting oil film information from multispectral remote sensing imagery combining spectral and textural features is proposed. Taking the Penglai 19-3 oil field spill accident of June 2011 as the study object, HJ-1 CCD remote sensing data were selected, textural features were obtained from the imagery using the gray-level co-occurrence matrix, and a support vector machine (SVM) model was used to classify the imagery combining textural and spectral features and to extract the oil film information of the study area; the result was compared with the classification result of an SVM model relying on spectral features alone. The results show that the total classification accuracy of the SVM model incorporating textural features reaches 90.29%, which is 12.41% higher than the accuracy of the classification relying on spectral features only. The inclusion of textural features reduces the influence of image noise on the classification result, the oil film edges are extracted more clearly, and the oil film center shows a continuous areal distribution; the SVM model incorporating textural features can therefore be used effectively for extracting sea-surface oil film information from multispectral remote sensing imagery.

  17. Preliminary study report: topological texture features extracted from standard radiographs of the heel bone are correlated with femoral bone mineral density

    Science.gov (United States)

    Boehm, H. F.; Lutz, J.; Koerner, M.; Notohamiprodjo, M.; Reiser, M.

    2009-02-01

    With the growing number of elderly patients in industrialized nations, the incidence of geriatric, i.e. osteoporotic, fractures is steadily on the rise. It is of great importance to understand the characteristics of hip fractures and to provide diagnostic tests for the assessment of an individual's fracture risk that allow preventive action to be taken and therapeutic advice to be given. At present, bone mineral density (BMD) obtained from DXA (dual-energy x-ray absorptiometry) is the clinical standard of reference for diagnosis and follow-up of osteoporosis. Since availability of DXA - other than that of clinical X-ray imaging - is usually restricted to specialized medical centers, it is worth trying to implement alternative methods to estimate an individual's BMD. Radiographs of the peripheral skeleton, e.g. the ankle, range among the most ordered diagnostic procedures in surgery for exclusion or confirmation of fracture. It would be highly beneficial if - as a by-product of conventional imaging - one could obtain a quantitative parameter that is closely correlated with femoral BMD in addition to the original diagnostic information, e.g. fracture status at the peripheral site. Previous studies have demonstrated a correlation between calcaneal BMD and osteoporosis. The objective of our study was to test the hypothesis that topological analysis of calcaneal bone texture depicted by a lateral x-ray projection of the ankle allows femoral BMD to be estimated. Our analysis of 34 post-menopausal patients indicates that texture properties based on graylevel topology in calcaneal x-ray films are closely correlated with BMD at the hip and may qualify as a substitute indicator of femoral fracture risk.

  18. Content-Based Image Retrieval using Color Moment and Gabor Texture Feature

    Directory of Open Access Journals (Sweden)

    K. Hemachandran

    2012-09-01

    Full Text Available Content based image retrieval (CBIR) has become one of the most active research areas in the past few years. Many indexing techniques are based on global feature distributions. However, these global distributions have limited discriminating power because they are unable to capture local image information. In this paper, we propose a content-based image retrieval method which combines color and texture features. To improve the discriminating power of color indexing techniques, we encode a minimal amount of spatial information in the color index. To compute the color features, an image is divided horizontally into three equal non-overlapping regions. From each region of the image, we extract the first three moments of the color distribution from each color channel and store them in the index, i.e., for an HSV color space, we store 27 floating point numbers per image. As texture features, Gabor texture descriptors are adopted. We assign weights to each feature and calculate the similarity of the combined color and texture features using the Canberra distance as the similarity measure. Experimental results show that the proposed method has higher retrieval accuracy than other conventional methods combining color moments and texture features based on a global feature approach.
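
    Following the abstract's description (three horizontal bands, three moments per HSV channel, 27 values per image, Canberra similarity), a minimal sketch might look as follows; the BGR input convention and the use of scipy for skewness and distance are assumptions, and the Gabor texture part and feature weighting are omitted.

    ```python
    # Sketch under assumptions: 3 horizontal bands x 3 HSV channels x 3 moments
    # = 27 color-moment features, compared with the Canberra distance.
    import cv2
    import numpy as np
    from scipy.spatial.distance import canberra
    from scipy.stats import skew

    def color_moments_hsv(image_bgr):
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
        feats = []
        for band in np.array_split(hsv, 3, axis=0):       # three horizontal regions
            for c in range(3):                            # H, S, V channels
                vals = band[:, :, c].ravel()
                feats += [vals.mean(), vals.std(), skew(vals)]
        return np.array(feats)                            # 27 values per image

    # d = canberra(color_moments_hsv(img_a), color_moments_hsv(img_b))  # smaller = more similar
    ```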

  19. Second order Statistical Texture Features from a New CSLBPGLCM for Ultrasound Kidney Images Retrieval

    Directory of Open Access Journals (Sweden)

    Chelladurai CALLINS CHRISTIYANA

    2013-12-01

    Full Text Available This work proposes a new method called the Center Symmetric Local Binary Pattern Grey Level Co-occurrence Matrix (CSLBPGLCM) for the purpose of extracting second order statistical texture features from ultrasound kidney images. These features are then fed into an ultrasound kidney image retrieval system for medical applications. This new GLCM matrix combines the benefits of CSLBP and the conventional GLCM. The main intention of this CSLBPGLCM is to reduce the number of grey levels in an image, not by simply accumulating the grey levels but by incorporating another statistical texture feature into it. The proposed approach is carefully evaluated in an ultrasound kidney image retrieval system and has been compared with the conventional GLCM. It is experimentally proved that the proposed method increases the retrieval efficiency and accuracy and reduces the time complexity of the ultrasound kidney image retrieval system by means of second order statistical texture features.

  20. The extraction of hepatic fibrosis ultrasonic texture features based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    陈明丽; 陈亚青; 朱云开

    2014-01-01

    Objective To explore the efficacy of principal component analysis (PCA) in extracting texture features from hepatic fibrosis sonograms. Methods Ultrasonography was performed in 186 patients with chronic hepatitis B who underwent liver biopsies and serum tests. Fourteen texture parameters of the gray level co-occurrence matrix (GLCM) were extracted from each standard sonogram. Liver fibrosis was staged from S0 to S4 by histopathology. Principal components were extracted by PCA from the 14 GLCM texture feature parameters of the 186 human sonograms covering the 5 hepatic fibrosis stages. The correct classification rates of sonograms by discriminant analysis models built on the 2 sets of parameters were compared. Results Three principal components (eigenvalues > 1) were obtained, which could explain 96.12% of the texture features of the sonograms. A cross-validation test showed that the correct classification rates were 55.9% and 60.8% for the two discriminant analysis models established on the 3 principal components and the 14 initial parameters, respectively. Conclusion Principal components extracted by PCA could reduce the data quantity with similar classification precision.

  1. A CAD System for Lesion Detection in Cervigram Based on Laws Textural Feature

    Directory of Open Access Journals (Sweden)

    RamaPraba P.S

    2014-01-01

    Full Text Available Cervical cancer is the second most common cancer among women worldwide. A computer-aided diagnosis system can help the colposcopist to analyze cervical images more accurately. This work aims to detect lesions in cervical images based on Laws textural features and a nearest neighbor classifier, and it can be used as a diagnostic tool. The images used for the detection of cervical cancer are taken using a colposcope, which magnifies the cells of the cervix. The Laws textural features are extracted from the cervical images and input to the nearest neighbor classifier. A total of 240 images are used for the evaluation, and an overall accuracy of 96% is obtained.
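
    For readers unfamiliar with Laws textural features, the sketch below computes a small set of Laws texture energy statistics; the mask set, window size, and the use of the mean energy as the summary statistic are illustrative choices, and the colposcopy preprocessing and nearest neighbor classifier are not shown.

    ```python
    # Illustrative Laws texture-energy sketch: 16 features from the outer
    # products of four 1-D Laws vectors (Level, Edge, Spot, Ripple).
    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    L5 = np.array([1, 4, 6, 4, 1], float)      # Level
    E5 = np.array([-1, -2, 0, 2, 1], float)    # Edge
    S5 = np.array([-1, 0, 2, 0, -1], float)    # Spot
    R5 = np.array([1, -4, 6, -4, 1], float)    # Ripple

    def laws_energy_features(image, window=15):
        image = np.asarray(image, dtype=float)
        feats = []
        for a in (L5, E5, S5, R5):
            for b in (L5, E5, S5, R5):
                response = convolve(image, np.outer(a, b))
                energy = uniform_filter(np.abs(response), size=window)
                feats.append(energy.mean())
        return np.array(feats)                 # 16 Laws energy statistics
    ```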

  2. Segmentation and Classification of Skin Lesions Based on Texture Features

    Directory of Open Access Journals (Sweden)

    B.Gohila vani

    2014-04-01

    Full Text Available Skin cancer is the most common type of cancer and represents 50% of all new cancers detected each year. The deadliest form of skin cancer is melanoma, and its incidence has been rising at a rate of 3% per year. Due to the cost of having dermatologists monitor every patient, there is a need for a computerized system to evaluate a patient's risk of melanoma using images of their skin lesions captured with a standard digital camera. In the proposed method, a novel texture-based skin lesion segmentation algorithm is used, and the stages of skin cancer are classified using a probabilistic neural network. The probabilistic neural network gives better performance in this system for detecting the various stages of skin lesions. Extracting the characteristics of the various skin lesions and combining their features gives better classification with the proposed probabilistic neural network. There are five different skin lesions, commonly grouped as Actinic Keratosis (AK), Basal Cell Carcinoma (BCC), Melanocytic Nevus / Mole (ML), Squamous Cell Carcinoma (SCC), and Seborrhoeic Keratosis (SK). The system is used to classify the queried images automatically to decide the stage of abnormality. The lesion diagnosis system involves two stages of processing: training and classification. Feature selection is used in the classification framework, which chooses the most relevant feature subsets at each node of the hierarchy. An automatic classifier is used for classification based on learning with training samples of each stage. The accuracy of the proposed neural scheme is higher in discriminating cancerous and pre-malignant lesions from benign skin lesions, and it attains a high overall classification accuracy for skin lesions.

  3. Blurred face recognition by fusing blur-invariant texture and structure features

    Science.gov (United States)

    Zhu, Mengyu; Cao, Zhiguo; Xiao, Yang; Xie, Xiaokang

    2015-10-01

    Blurred face recognition remains a challenging task, but one with wide applications. Image blur can largely affect recognition performance. Local phase quantization (LPQ) was proposed to extract blur-invariant texture information. It was used for blurred face recognition and achieved good performance. However, LPQ considers only the phase-based blur-invariant texture information, which is not sufficient. In addition, LPQ is extracted holistically, which cannot fully exploit its discriminative power on local spatial properties. In this paper, we propose a novel method for blurred face recognition. Texture and structure blur-invariant features are extracted and fused to generate a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled way and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong gradients. The improved HOG is then fused with the original HOG by canonical correlation analysis (CCA). Finally, we fuse the texture and structure features together by CCA to form the final blur-invariant representation of the face image. The experiments are performed on three face datasets. The results demonstrate that our improvements and our proposition achieve good performance in blurred face recognition.

  4. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional featur...

  5. Accuracy and variability of texture-based radiomics features of lung lesions across CT imaging conditions

    Science.gov (United States)

    Zheng, Yuese; Solomon, Justin; Choudhury, Kingshuk; Marin, Daniele; Samei, Ehsan

    2017-03-01

    Texture analysis for lung lesions is sensitive to changing imaging conditions but these effects are not well understood, in part, due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-Occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogenous and heterogeneous), and two sizes (diameter Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), four reconstruction kernel types (standard, soft, edge), and two slice thicknesses (0.6 mm and 5 mm). Another repeat scan was performed. Texture features from these images were extracted and compared to the ground truth feature values by percent relative error. The variability across imaging conditions was calculated by standard deviation across a certain imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features and as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernels. Thin slice thickness and edge reconstruction kernel overall produced more accurate and more repeatable results. Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slice thickness and sharp kernels), while others (e.g., Homogeneity) showed more accurate quantification under conditions that render smoother images (e.g., higher dose and smoother kernels). Care

  6. Optimal features selection based on circular Gabor filters and RSE in texture segmentation

    Science.gov (United States)

    Wang, Qiong; Liu, Jian; Tian, Jinwen

    2007-12-01

    This paper designs circular Gabor filters that incorporate human visual characteristics, and introduces the concept of mutual information entropy from rough set theory to evaluate the effect of the features extracted by different filters on clustering; redundant features are removed. Experimental results indicate that the proposed algorithm outperforms conventional approaches in terms of both objective measurements and visual evaluation in texture segmentation.

  7. Influence of texture feature size on spherical silicon solar cells

    Institute of Scientific and Technical Information of China (English)

    HAYASHI Shota; MINEMOTO Takashi; TAKAKURA Hideyuki; HAMAKAWA Yoshihiro

    2006-01-01

    The effects of surface texturing on spherical silicon solar cells were investigated. Surface texturing for spherical Si solar cells was prepared by immersing p-type spherical Si crystals in KOH solution with stirring. Two kinds of texture feature sizes (1 and 5 μm pyramids) were prepared by changing the stirring speed. After fabrication through our baseline processes, these cells were evaluated by solar cell performance and external quantum efficiency. The cells with 1 and 5 μm pyramids show short-circuit current density (Jsc) values of 31.9 and 33.2 mA·cm-2, which are 9% and 13% relative increases compared to the cell without texturing. Furthermore, the cell with 5 μm pyramids has a higher open-circuit voltage (0.589 V) than the cell with 1 μm pyramids (0.577 V). As a result, the conversion efficiency was improved from 11.4% for the cell without texturing to 12.1% for the cell with 5 μm pyramids.

  8. Texture feature selection with relevance learning to classify interstitial lung disease patterns

    Science.gov (United States)

    Huber, Markus B.; Bunte, Kerstin; Nagarajan, Mahesh B.; Biehl, Michael; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    The Generalized Matrix Learning Vector Quantization (GMLVQ) is used to estimate the relevance of texture features in their ability to classify interstitial lung disease patterns in high-resolution computed tomography (HRCT) images. After a stochastic gradient descent, the GMLVQ algorithm provides a discriminative distance measure of relevance factors, which can account for pairwise correlations between different texture features and their importance for the classification of healthy and diseased patterns. Texture features were extracted from gray-level co-occurrence matrices (GLCMs), and were ranked and selected according to their relevance obtained by GMLVQ and, for comparison, according to a mutual information (MI) criterion. A k-nearest-neighbor (kNN) classifier and a Support Vector Machine with a radial basis function kernel (SVMrbf) were optimized in a 10-fold cross-validation for different texture feature sets. In our experiment with real-world data, the feature sets selected by the GMLVQ approach had a significantly better classification performance compared with feature sets selected by MI ranking.

  9. Texture segmentation via nonlinear interactions among Gabor feature pairs

    Science.gov (United States)

    Tang, Hak W.; Srinivasan, Venugopal; Ong, Sim-Heng

    1995-01-01

    Segmentation of an image based on texture can be performed by a set of N Gabor filters that uniformly covers the spatial frequency domain. The filter outputs that characterize the frequency and orientation content of the intensity distribution in the vicinity of a pixel constitute an N-element feature vector. As an alternative to the computationally intensive procedure of segmentation based on the N-element vectors generated at each pixel, we propose an algorithm for selecting a pair of filters that provides maximum discrimination between two textures constituting the object and its surroundings in an image. Images filtered by the selected filters are nonlinearly transformed to produce two feature maps. The feature maps are smoothed by an intercompetitive and intracooperative interaction process between them. These interactions have proven to be much superior to simple Gaussian filtering in reducing the effects of spatial variability of feature maps. A segmented binary image is then generated by a pixel-by-pixel comparison of the two maps. Results of experiments involving several texture combinations show that this procedure is capable of producing clean segmentation.

  10. Mean shift texture surface detection based on WT and COM feature image selection

    Institute of Scientific and Technical Information of China (English)

    HAN Yan-fang; SHI Peng-fei

    2006-01-01

    Mean shift is a widely used clustering algorithm in image segmentation. However, the segmentation results are not as good as expected when dealing with textured surfaces, due to the influence of the textures. Therefore, an approach based on the wavelet transform (WT), the co-occurrence matrix (COM) and mean shift is proposed in this paper. First, WT and COM are employed to extract the optimal resolution approximation of the original image as a feature image. Then, mean shift is successfully used to obtain better detection results. Finally, experiments are done to show that this approach is effective.

  11. Novel Methods for Separation of Gangue from Limestone and Coal using Multispectral and Joint Color-Texture Features

    Science.gov (United States)

    Tripathy, Debi Prasad; Guru Raghavendra Reddy, K.

    2017-04-01

    Ore sorting is a useful tool to remove gangue material from ore and increase the quality of the ore. The vast developments in the area of artificial intelligence allow fast processing of full-color digital images for the desired investigations. The associated gangue minerals from limestone and coal mines were identified using three different approaches, all based on extensions of the co-occurrence matrix method. In the first method, the color features were extracted from the RGB color planes and texture features were extracted using a multispectral extension, in which co-occurrence matrices were computed both between and within the color bands. The second method used joint color-texture features, where color features were added to gray scale texture features. The last method used gray scale texture features computed on a quantized color image. The results showed that for separation of gangue from limestone the joint color-texture method achieved an accuracy of 98%, while for separation of gangue from coal the multispectral method with correlation and the joint color-texture method both achieved 100%. Combined multispectral and joint color-texture methods gave good accuracy with 64-gray-level quantization for separation of gangue from limestone and coal.

  12. Textural feature selection for enhanced detection of stationary humans in through-the-wall radar imagery

    Science.gov (United States)

    Chaddad, A.; Ahmad, F.; Amin, M. G.; Sevigny, P.; DiFilippo, D.

    2014-05-01

    Feature-based methods have recently been considered in the literature for detection of stationary human targets in through-the-wall radar imagery. Specifically, textural features, such as contrast, correlation, energy, entropy, and homogeneity, have been extracted from gray-level co-occurrence matrices (GLCMs) to aid in discriminating the true targets from multipath ghosts and clutter that closely mimic the target in size and intensity. In this paper, we address the task of feature selection to identify the relevant subset of features in the GLCM domain, while discarding those that are either redundant or confusing, thereby improving the performance of the feature-based scheme to distinguish between targets and ghosts/clutter. We apply a decision tree algorithm to find the optimal combination of co-occurrence based textural features for the problem at hand. We employ a K-Nearest Neighbor classifier to evaluate the performance of the optimal textural feature based scheme in terms of its target and ghost/clutter discrimination capability, and use real data collected with the vehicle-borne multi-channel through-the-wall radar imaging system by Defence Research and Development Canada. For the specific data analyzed, it is shown that the identified dominant features yield a higher classification accuracy, with a lower number of false alarms and missed detections, compared to the full GLCM based feature set.

  13. Unified Saliency Detection Model Using Color and Texture Features.

    Science.gov (United States)

    Zhang, Libo; Yang, Lin; Luo, Tiejian

    2016-01-01

    Saliency detection has attracted the attention of many researchers and has become a very active area of research. Recently, many saliency detection models have been proposed and have achieved excellent performance in various fields. However, most of these models only consider low-level features. This paper proposes a novel saliency detection model using both color and texture features and incorporating higher-level priors. The SLIC superpixel algorithm is applied to form an over-segmentation of the image. A color saliency map and a texture saliency map are calculated based on the region contrast method and adaptive weights. Higher-level priors, including a location prior and a color prior, are incorporated into the model to achieve better performance, and a full-resolution saliency map is obtained by using an up-sampling method. Experimental results on three datasets demonstrate that the proposed saliency detection model outperforms the state-of-the-art models.
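
    A rough sketch of the color part of such a region-contrast computation, assuming an RGB NumPy image and scikit-image tooling; the texture saliency map, adaptive weights and higher-level priors of the paper are omitted, and the size-weighted color distance used here is only one plausible form of region contrast.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def color_region_contrast_saliency(rgb, n_segments=200):
    """Per-pixel color saliency from SLIC superpixels and size-weighted region contrast."""
    labels = slic(rgb, n_segments=n_segments, compactness=10)
    lab = rgb2lab(rgb)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    sizes = np.array([(labels == i).sum() for i in ids], dtype=float)
    # contrast of a region = color distance to every other region, weighted by region size
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    sal = (dists * sizes[None, :]).sum(axis=1)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    remapped = np.searchsorted(ids, labels)      # map region ids to 0..n-1
    return sal[remapped]                          # per-pixel saliency in [0, 1]
```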

  14. Psoriasis Detection Using Skin Color and Texture Features

    Directory of Open Access Journals (Sweden)

    Nidhal K.A. Abbadi

    2010-01-01

    Full Text Available Problem statement: In this study a skin disease diagnosis system was developed and tested. The system was used for diagnosis of the psoriasis skin disease. Approach: The present study relied on both skin color and texture features (features derived from the GLCM) to give better and more efficient recognition accuracy of skin diseases. We used feed forward neural networks to classify input images as psoriasis infected or non-infected. Results: The system gave very encouraging results during the neural network training and generalization phase. Conclusion: The aim of this work was to evaluate the ability of the proposed skin texture recognition algorithm to discriminate between healthy and infected skin, with psoriasis taken as an example.

  15. FEATURE FUSION TECHNIQUE FOR COLOUR TEXTURE CLASSIFICATION SYSTEM BASED ON GRAY LEVEL CO-OCCURRENCE MATRIX

    OpenAIRE

    Shunmuganathan, K. L.; A. Suresh

    2012-01-01

    In this study, an efficient feature fusion based technique for the classification of colour texture images in VisTex album is presented. Gray Level Co-occurrence Matrix (GLCM) and its associated texture features contrast, correlation, energy and homogeneity are used in the proposed approach. The proposed GLCM texture features are obtained from the original colour texture as well as the first non singleton dimension of the same image. These features are fused at feature level to classify the c...

  16. Feature Extraction Using Mfcc

    Directory of Open Access Journals (Sweden)

    Shikha Gupta

    2013-08-01

    Full Text Available Mel Frequency Cepstral Coefficient is a very common and efficient technique for signal processing. This paper presents a new purpose of working with MFCC by using it for hand gesture recognition. The objective of using MFCC for hand gesture recognition is to explore the utility of the MFCC for image processing. Till now it has been used in speech recognition and for speaker identification. The present system is based on converting the hand gesture into a one-dimensional (1-D) signal and then extracting the first 13 MFCCs from the converted 1-D signal. Classification is performed by using a Support Vector Machine. Experimental results show that the proposed application of MFCC for gesture recognition has very good accuracy and hence can be used for recognition of sign language or for other household applications, in combination with other techniques such as Gabor filters and DWT, to increase the accuracy rate and make it more efficient.
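
    A minimal sketch of the image-to-MFCC idea summarized above: the gesture image is raster-scanned into a 1-D signal, the first 13 MFCCs are extracted from it, and the descriptors are fed to an SVM. The nominal sampling rate, the normalization and the SVM settings are illustrative placeholders rather than values from the paper; librosa and scikit-learn are assumed to be available.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def image_to_mfcc(gray_image, sr=8000, n_mfcc=13):
    """Flatten a grayscale gesture image row by row and extract 13 MFCCs from it."""
    signal = gray_image.astype(np.float32).ravel()
    signal = (signal - signal.mean()) / (signal.std() + 1e-8)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                     # one 13-D descriptor per image

# Hypothetical usage: gesture_images and labels are assumed to exist.
# X = np.vstack([image_to_mfcc(img) for img in gesture_images])
# clf = SVC(kernel="rbf").fit(X, labels)
```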

  17. FEATURE FUSION TECHNIQUE FOR COLOUR TEXTURE CLASSIFICATION SYSTEM BASED ON GRAY LEVEL CO-OCCURRENCE MATRIX

    Directory of Open Access Journals (Sweden)

    K. L. Shunmuganathan

    2012-01-01

    Full Text Available In this study, an efficient feature fusion based technique for the classification of colour texture images in the VisTex album is presented. The Gray Level Co-occurrence Matrix (GLCM) and its associated texture features contrast, correlation, energy and homogeneity are used in the proposed approach. The proposed GLCM texture features are obtained from the original colour texture as well as the first non-singleton dimension of the same image. These features are fused at feature level to classify the colour texture image using a nearest neighbor classifier. The results demonstrate that the proposed fusion of difference image GLCM features is much more efficient than the original GLCM features.

  18. Using Both HSV Color and Texture Features to Classify Archaeological Fragments

    Directory of Open Access Journals (Sweden)

    Nada A. Rasheed

    2015-08-01

    Full Text Available Normally, artifacts are found in a fractured state and mixed randomly, and the process of manual classification may require a great deal of time and tedious work. Therefore, classifying these fragments is a challenging task, especially if the archaeological object consists of thousands of fragments. Hence, it is important to come up with a solution for classifying the archaeological fragments accurately into groups and reassembling each group to its original form using computer techniques. In this study we sought a solution to this problem based on color and texture features. The algorithm begins by partitioning the image into six sub-blocks. An HSV color space feature is then extracted from each block and represented as a cumulative histogram, so that six vectors are obtained for each image. To extract the texture feature of each sub-block, the Gray Level Co-occurrence Matrix (GLCM) is used, which includes Energy, Contrast, Correlation and Homogeneity. At the final stage, the color and texture features are classified using the k-Nearest Neighbors (KNN) algorithm; this method is able to classify the fragments with high accuracy. The algorithm was tested on several images of pottery fragments and yielded results with accuracy as high as 86.51% of original grouped cases correctly classified.

  19. The effects of TIS and MI on the texture features in ultrasonic fatty liver images

    Science.gov (United States)

    Zhao, Yuan; Cheng, Xinyao; Ding, Mingyue

    2017-03-01

    Nonalcoholic fatty liver disease (NAFLD) is now prevalent and has a worldwide distribution. Although ultrasound imaging has been regarded as the common method to diagnose fatty liver, it is not able to detect NAFLD in its early stage and is limited by the diagnostic instruments and some other factors. B-scan image feature extraction of fatty liver can assist the doctor in analyzing the patient's situation and enhance the efficiency and accuracy of clinical diagnoses. However, some uncertain factors in ultrasonic diagnoses are often ignored during feature extraction. In this study, a nonalcoholic fatty liver rabbit model was made and its liver ultrasound images were collected with different settings of the thermal index of soft tissue (TIS) and mechanical index (MI). Then, texture features were calculated based on the gray level co-occurrence matrix (GLCM) and the impacts of TIS and MI on these features were analyzed and discussed. Furthermore, the receiver operating characteristic (ROC) curve was used to evaluate whether each feature was effective for given TIS and MI values. The results showed that TIS and MI do affect the features extracted from the healthy liver, while the texture features of fatty liver are relatively stable. In addition, setting TIS to 0.3 and MI to 0.9 might be a better choice when using a computer aided diagnosis (CAD) method for fatty liver recognition.

  20. Improving the precision of CBIR systems by color and texture feature adaptation using GSA

    Directory of Open Access Journals (Sweden)

    E. Rashedi

    2013-12-01

    Full Text Available Content-based image retrieval, CBIR, is an interesting problem in pattern recognition. This paper presents an approach to reducing the semantic gap between low-level visual features and high-level semantics by parameter adaptation in the feature extraction sub-block. In the proposed method, GSA is used. In texture feature extraction, the parameters of a 6-tap parametrized orthogonal mother wavelet, and in color feature extraction, the quantization levels, are adapted to reach maximum precision of the image retrieval system. Experimental results and a comparison with a conventional CBIR system are reported on a database of 1000 images. The results confirm the efficiency of the proposed adapted image retrieval system.

  1. AUTOMATED DETECTION OF SKIN DISEASES USING TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    DR.RANJAN PAREKH

    2011-06-01

    Full Text Available This paper proposes an automated system for recognizing disease conditions of human skin in the context of health informatics. The disease conditions are recognized by analyzing skin texture images using a set of normalized symmetrical Grey Level Co-occurrence Matrices (GLCM). The GLCM defines the probability of grey level i occurring in the neighborhood of another grey level j at a distance d in direction θ. Directional GLCMs are computed along four directions: horizontal (θ = 0°), vertical (θ = 90°), right diagonal (θ = 45°) and left diagonal (θ = 135°), and a set of features computed from each are averaged to provide an estimation of the texture class. The system is tested using 180 images pertaining to three dermatological skin conditions, viz. Dermatitis, Eczema and Urticaria. An accuracy of 96.6% is obtained using a multilayer perceptron (MLP) as a classifier.
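
    A minimal sketch of the averaged directional-GLCM features described above, assuming an 8-bit grayscale skin image and the scikit-image GLCM implementation; the inter-pixel distance and the particular property set are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def directional_glcm_features(gray_u8, distance=1):
    """GLCM features along 0, 45, 90 and 135 degrees, averaged over the four directions."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_u8, distances=[distance], angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    # graycoprops returns one value per (distance, angle); average across angles
    return np.array([graycoprops(glcm, p).mean() for p in props])
```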

  2. Feature extraction using fractal codes

    NARCIS (Netherlands)

    Schouten, Ben; Zeeuw, Paul M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  3. Feature Extraction Using Fractal Codes

    NARCIS (Netherlands)

    Schouten, B.A.M.; Zeeuw, P.M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  4. Classification of High Resolution C-Band PolSAR Data Based on Polarimetric and Texture Features

    Science.gov (United States)

    Zhao, Lei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Li, Lan

    2014-11-01

    PolSAR image classification is an important technique in the remote sensing area. For high resolution PolSAR images, polarimetric and texture features are equally important for classification. Texture features are mainly extracted through the Gray Level Co-occurrence Matrix (GLCM) method, but this method has some deficiencies. First, the GLCM method can only work on gray-scale images; secondly, the number of texture features extracted by the GLCM method generally runs to dozens, or even hundreds. Too many features may carry large redundancy and will increase the complexity of classification. Therefore, this paper introduces a new texture feature factor, RK, derived from a non-Gaussian statistical model of the PolSAR image. Using domestic airborne C-band PolSAR image data, we completed classification combining the polarimetric and texture characteristics. The results showed that this new texture feature factor RK can overcome the above drawbacks and can achieve the same performance as the GLCM method.

  5. PERFORMANCE ANALYSIS OF GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURE FEATURES FOR GLAUCOMA DIAGNOSIS

    Directory of Open Access Journals (Sweden)

    Sakthivel Karthikeyan

    2014-01-01

    Full Text Available Glaucoma is a multifactorial optic neuropathy disease characterized by elevated Intra Ocular Pressure (IOP). As the visual loss caused by the disease is irreversible, early detection is essential. Fundus images are used as input and are preprocessed using histogram equalization. First order features from the histogram and second order features from the Gray Level Co-occurrence Matrix (GLCM) are extracted from the preprocessed image as textural features, which reflect physiological changes in the fundus images. Second order textural features are extracted for different quantization levels, namely 8, 16, 32, 64, 128 and 256, in four orientations viz. 0, 45, 90 and 135° for various distances. Extracted features are selected using the Sequential Forward Floating Selection (SFFS) technique. The selected features are fed to a Back Propagation Network (BPN) for classification as normal or abnormal images. The proposed computer aided diagnostic system achieved 96% sensitivity, 94% specificity and 95% accuracy and can be used for screening purposes. In this study, the analysis of gray levels has shown their significance in the classification of glaucoma.
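
    A hedged sketch of the quantization step described above: the 8-bit fundus image is re-quantized to a chosen number of gray levels before second-order GLCM features are computed, and the same routine can be repeated for levels 8 through 256. The distance, property set and the downstream SFFS/BPN stages are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def quantized_glcm_features(gray_u8, levels=32, distance=1):
    """Second-order GLCM features after re-quantizing an 8-bit image to `levels` gray levels."""
    q = (gray_u8.astype(np.uint16) * levels // 256).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[distance], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def features_at_all_levels(gray_u8, levels_list=(8, 16, 32, 64, 128, 256)):
    """One feature vector per quantization level, as in the study's comparison."""
    return {lv: quantized_glcm_features(gray_u8, levels=lv) for lv in levels_list}
```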

  6. Effect of zooming on texture features of ultrasonic images

    Directory of Open Access Journals (Sweden)

    Kyriacou Efthyvoulos

    2006-01-01

    Full Text Available Abstract Background Unstable carotid plaques on subjective, visual assessment using B-mode ultrasound scanning appear as echolucent and heterogeneous. Although previous studies on computer assisted plaque characterisation have standardised B-mode images for brightness, improving the objective assessment of echolucency, little progress has been made towards standardisation of texture analysis methods, which assess plaque heterogeneity. The aim of the present study was to investigate the influence of image zooming during ultrasound scanning on textural features and to test whether or not resolution standardisation decreases the variability introduced. Methods Eighteen still B-mode images of carotid plaques were zoomed during carotid scanning (zoom factor 1.3) and both images were transferred to a PC and normalised. Using bilinear and bicubic interpolation, the original images were interpolated in a process of simulating off-line zoom using the same interpolation factor. With the aid of the colour-coded image, carotid plaques of the original, zoomed and two resampled images for each case were outlined, and histogram, first order and second order statistics were subsequently calculated. Results Most second order statistics (21/25, 84%) were significantly (p Conclusion Texture analysis of ultrasonic plaques should be performed under standardised resolution settings; otherwise a resolution normalisation algorithm should be applied.

  7. Automatic Detection of Tumor in Wireless Capsule Endoscopy Images Using Energy Based Textural Features and SVM Based RFE Approach

    Directory of Open Access Journals (Sweden)

    B. Ashokkumar

    2014-04-01

    Full Text Available This paper deals with processing of wireless capsule endoscopy (WCE) images from the gastrointestinal tract, by extracting textural features and developing a suitable classifier to recognize an image as normal or abnormal/tumor. Images obtained from WCE are prone to noise. To reduce the noise, a filtration technique is used. Since the quality of the filtered image is degraded, the discrete wavelet transform (DWT) is used to enhance image quality. The textural features (average, energy) are obtained from the DWT for three color spaces (RGB, HSI, Lab). Feature selection is based on a support vector machine-recursive feature elimination approach.

  8. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is now weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image
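
    A simplified sketch of the weighted local-texture idea outlined above: plain uniform LBP stands in for the paper's enhanced LBP, each sub-region's histogram is weighted by the local intensity variance as the significance estimate, and the per-region PCA step is omitted for brevity. The grid size and LBP parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_region_lbp(gray, grid=(4, 4), P=8, R=1):
    """Concatenated, variance-weighted LBP histograms of face sub-regions."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                  # number of 'uniform' LBP codes
    h, w = gray.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
            xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
            hist, _ = np.histogram(lbp[ys, xs], bins=n_bins,
                                   range=(0, n_bins), density=True)
            weight = gray[ys, xs].var()             # region significance ~ local variance
            feats.append(weight * hist)
    return np.concatenate(feats)
```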

  9. Military personnel recognition system using texture, colour, and SURF features

    Science.gov (United States)

    Irhebhude, Martins E.; Edirisinghe, Eran A.

    2014-06-01

    This paper presents an automatic, machine vision based, military personnel identification and classification system. Classification is done using a Support Vector Machine (SVM) on sets of Army, Air Force and Navy camouflage uniform personnel datasets. In the proposed system, the arm of service of personnel is recognised by the camouflage of a person's uniform, the type of cap and the type of badge/logo. The detailed analyses performed include: camouflage cap and plain cap differentiation using gray level co-occurrence matrix (GLCM) texture features; classification of Army, Air Force and Navy camouflaged uniforms using GLCM texture and colour histogram bin features; and plain cap badge classification into Army, Air Force and Navy using Speeded Up Robust Features (SURF). The proposed method recognised the camouflage personnel arm of service on sets of data retrieved from Google Images and selected military websites. Correlation-based Feature Selection (CFS) was used to improve recognition and reduce dimensionality, thereby speeding up the classification process. With this method, success rates recorded during the analysis include 93.8% for the camouflage appearance category, and 100%, 90% and 100% for the plain cap and camouflage cap categories for Army, Air Force and Navy, respectively. Accurate recognition was recorded using SURF for the plain cap badge category. Substantial analysis has been carried out and the results prove that the proposed method can correctly classify military personnel into various arms of service. We show that the proposed method can be integrated into a face recognition system, which would recognise personnel in addition to determining the arm of service to which they belong. Such a system can be used to enhance the security of a military base or facility.

  10. Thermography based breast cancer detection using texture features and minimum variance quantization

    Science.gov (United States)

    Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar

    2014-01-01

    In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of the abnormal pattern. Computed features based on gray-level co-occurrence matrices (GLCM) are used to evaluate the effectiveness of the textural information possessed by mass regions. A total of 20 GLCM features are extracted from the thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross validation and receiver operating characteristic analysis were performed. The verification results show that the proposed algorithm gives the best classification results with the K-Nearest Neighbor classifier, at an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image and erosion of the image. The hottest regions of the thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract almost the exact shape of tumors. PMID:26417334
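
    A hedged sketch of the classification stage described above, assuming a matrix X of per-thermogram GLCM features and binary labels y already exist; the feature scaling, the value of k and the scoring choices are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_knn(X, y, k=5):
    """Five-fold cross-validated accuracy and ROC AUC for a k-NN classifier."""
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    return acc, auc
```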

  11. Computerized lung cancer malignancy level analysis using 3D texture features

    Science.gov (United States)

    Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei

    2016-03-01

    Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, due to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules as highly suspicious for cancer (level 5) or moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, and the reported AUC and accuracy were 0.8901 and 0.9353.
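
    A rough sketch of the dimension-reduction and classification pipeline mentioned above, assuming a high-dimensional texture-feature matrix X and binary malignancy labels y. For simplicity the MDS embedding is computed on the full feature set before cross-validation, which is a shortcut rather than the paper's exact protocol; the imbalanced-learn RUSBoostClassifier is used as the RUSBoost implementation.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import cross_val_score
from imblearn.ensemble import RUSBoostClassifier   # random undersampling + boosting

def mds_rusboost_auc(X, y, n_components=10):
    """Embed the features with MDS, then cross-validate a RUSBoost classifier."""
    X_low = MDS(n_components=n_components, random_state=0).fit_transform(X)
    clf = RUSBoostClassifier(random_state=0)
    return cross_val_score(clf, X_low, y, cv=5, scoring="roc_auc").mean()
```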

  12. Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images

    Science.gov (United States)

    Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan

    2017-08-01

    Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney from ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain different qualities (different resolutions) of images. Initially, the acquired images are pre-processed to de-noise the speckle and ensure the image preserves the pixels in a region of interest (ROI) for further extraction. A Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate some unwanted intensity values. Statistical texture feature methods are used, namely the Intensity Histogram (IH), Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three features (Contrast, Difference Variance and Inverse Difference Moment Normalized) from the GLCM are not statistically significant; this suggests that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.

  13. Statistical analysis of textural features for improved classification of oral histopathological images.

    Science.gov (United States)

    Muthu Rama Krishnan, M; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K

    2012-04-01

    The objective of this paper is to provide an improved technique which can assist oncopathologists in correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme is composed of collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation and finally classification. In this study, collagen fibres are segmented on the R, G, B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification to reduce the stain intensity variation. Thereafter, textural features of the collagen area are extracted using fractal approaches, viz. differential box counting and the Brownian motion curve. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated based on various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers, viz. Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that SVM with a linear kernel function provides better classification accuracy (91.64%) compared to the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. The results are studied and discussed here.

  14. Analysis of breast lesions on contrast-enhanced magnetic resonance images using high-dimensional texture features

    Science.gov (United States)

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Wismueller, Axel

    2010-03-01

    Haralick texture features derived from gray-level co-occurrence matrices (GLCM) were used to classify the character of suspicious breast lesions as benign or malignant on dynamic contrast-enhanced MRI studies. Lesions were identified and annotated by an experienced radiologist on 54 MRI exams of female patients for whom histopathological reports were available prior to this investigation. GLCMs were then extracted from these 2D regions of interest (ROI) for four principal directions (0°, 45°, 90° & 135°) and used to compute Haralick texture features. A fuzzy k-nearest neighbor (k-NN) classifier was optimized in ten-fold cross-validation for each texture feature and the classification performance was calculated on an independent test set as a function of area under the ROC curve. The lesion ROIs were characterized by texture feature vectors containing the Haralick feature values computed from each directional GLCM, and the classifier results obtained were compared to a previously used approach where the directional GLCMs were summed to a nondirectional GLCM, which could further yield a set of texture feature values. The impact of varying the inter-pixel distance while generating the GLCMs on the classifier's performance was also investigated. The classifier's AUC was found to increase significantly when the high-dimensional texture feature vector approach was pursued, and when features derived from GLCMs generated using different inter-pixel distances were incorporated into the classification task. These results indicate that lesion character classification accuracy could be improved by retaining the texture features derived from the different directional GLCMs rather than combining these to yield a set of scalar feature values instead.
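
    A small sketch contrasting the two representations compared above: keeping the features from each directional GLCM separately versus summing the directional GLCMs into one nondirectional matrix first. The ROI is assumed to be already quantized to integer gray levels below `levels`, and the scikit-image property names only approximate the full Haralick feature set used in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ("contrast", "correlation", "energy", "homogeneity")
ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

def glcm_four_directions(roi, distance=1, levels=64):
    """Directional GLCMs for one ROI; `roi` holds integers in [0, levels)."""
    return graycomatrix(roi, [distance], ANGLES, levels=levels,
                        symmetric=True, normed=True)

def directional_vector(glcm):
    """One value per (property, direction): a 4 x 4 = 16-dimensional vector."""
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

def nondirectional_vector(glcm):
    """Sum the four directional GLCMs first, then compute one value per property."""
    summed = glcm.sum(axis=3, keepdims=True)    # graycoprops renormalizes internally
    return np.array([graycoprops(summed, p).item() for p in PROPS])
```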

  15. SOFT COMPUTING BASED MEDICAL IMAGE RETRIEVAL USING SHAPE AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    M. Mary Helta Daisy

    2014-01-01

    Full Text Available Image retrieval is a challenging and important research area with applications such as digital libraries and medical image databases. Content-based image retrieval is useful for retrieving images from a database based on a feature vector generated from image features. In this study, we present image retrieval based on a genetic algorithm. Shape features and morphology-based texture features are extracted from the images in the database and from the query image. A chromosome is then generated based on the distance values obtained from the differences between the feature vectors of the database images and the query image. The genetic operators crossover and mutation are applied to the selected chromosomes. After that, the best chromosome is selected and the images most similar to the query image are displayed. The retrieval performance of the method shows better retrieval results.

  16. Research on texture feature of RS image based on cloud model

    Science.gov (United States)

    Wang, Zuocheng; Xue, Lixia

    2008-10-01

    This paper presents a new method for texture feature representation in RS images based on the cloud model. Aiming at the fuzziness and randomness of RS images, we introduce cloud theory into RS image processing in a creative way. The digital characteristics of clouds integrate the fuzziness and randomness of linguistic terms in a unified way and map between quantitative and qualitative concepts. We adopt a texture multi-dimensional cloud to handle the vagueness and randomness of texture features in RS images. The method has two steps: 1) Correlation analysis of texture statistical parameters in the Grey Level Co-occurrence Matrix (GLCM) and parameter fuzzification. The GLCM can represent the texture feature well in many respects. According to the expressive power of the texture statistical parameters and by correlation analysis of these parameters, we can select the few texture statistical parameters that best represent the texture feature. By the fuzzification algorithm, the texture statistical parameters can be mapped to a fuzzy cloud space. 2) Texture multi-dimensional cloud model construction. Based on the selected texture statistical parameters and the fuzzy cloud space, a texture multi-dimensional cloud model can be constructed in micro-windows of the image. According to the membership of the texture statistical parameters, we can obtain the cloud-drop samples. By a backward cloud generator, the digital characteristics of the texture multi-dimensional cloud model can be obtained and the Mathematical Expected Hyper Surface (MEHS) of the multi-dimensional cloud of micro-windows can be constructed. At last, the weighted sum of the 3 digital characteristics of the micro-window cloud model was proposed and used for texture representation in RS images. The method we develop is demonstrated by applying it to texture representation in many RS images, and various performance studies testify that the method is both efficient and effective. It enriches the cloud

  17. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    Energy Technology Data Exchange (ETDEWEB)

    Krafft, S [The University of Texas MD Anderson Cancer Center, Houston, TX (United States); The University of Texas Graduate School of Biomedical Sciences, Houston, TX (United States); Briere, T; Court, L; Martel, M [The University of Texas MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP

  18. Employing wavelet-based texture features in ammunition classification

    Science.gov (United States)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves the classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. For this task, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.

  19. Feature extraction for speaker diarization

    OpenAIRE

    Negre Rabassa, Enric

    2016-01-01

    Different low-level and high-level features will be explored and compared for automatic speaker diarization. Feature extraction for speaker diarization using different databases.

  20. Spectrum and Image Texture Features Analysis for Early Blight Disease Detection on Eggplant Leaves

    Directory of Open Access Journals (Sweden)

    Chuanqi Xie

    2016-05-01

    Full Text Available This study investigated both spectrum and texture features for detecting early blight disease on eggplant leaves. Hyperspectral images of healthy and diseased samples were acquired covering the wavelengths from 380 to 1023 nm. Four gray images were identified according to the effective wavelengths (408, 535, 624 and 703 nm). The hyperspectral images were then converted into RGB, HSV and HLS images. Finally, eight texture features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation) based on the gray level co-occurrence matrix (GLCM) were extracted from the gray images and the RGB, HSV and HLS images, respectively. The dependent variables for healthy and diseased samples were set as 0 and 1. K-Nearest Neighbor (KNN) and AdaBoost classification models were established for detecting healthy and infected samples. All models obtained good results with classification rates (CRs) over 88.46% in the testing sets. The results demonstrated that spectrum and texture features were effective for early blight disease detection on eggplant leaves.

  1. Feature extraction for the analysis of colon status from the endoscopic images

    Directory of Open Access Journals (Sweden)

    Krishnan Shankar M

    2003-04-01

    Full Text Available Abstract Background Extracting features from colonoscopic images is essential for obtaining features that characterize the properties of the colon. The features are employed in the computer-assisted diagnosis of colonoscopic images to assist the physician in detecting the colon status. Methods Endoscopic images contain rich texture and color information. Novel schemes are developed to extract new texture features from the texture spectra in the chromatic and achromatic domains, and color features for a selected region of interest from each color component histogram of the colonoscopic images. These features are reduced in size using Principal Component Analysis (PCA) and are evaluated using a Backpropagation Neural Network (BPNN). Results Features extracted from endoscopic images were tested to classify the colon status as either normal or abnormal. The classification results obtained show the features' capability for classifying the colon's status. The average classification accuracy, using a hybrid of the texture and color features with PCA (τ = 1%), is 97.72%. It is higher than the average classification accuracy using only texture (96.96%, τ = 1%) or color (90.52%, τ = 1%) features. Conclusion In conclusion, novel methods for extracting new texture- and color-based features from colonoscopic images to classify the colon status have been proposed. A new approach using PCA in conjunction with BPNN for evaluating the features has also been proposed. The preliminary test results support the feasibility of the proposed method.

  2. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

    Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. The current research trend in biometric security uses feature optimization to improve face detection techniques. The face essentially presents three types of features: skin color, texture, and the shape and size of the face, of which the most important are skin color and texture. The proposed detection technique uses the texture features of the face image. For texture extraction, a partial feature extraction function is used, which is a promising approach to shape feature analysis. For feature selection and optimization, a multi-objective TLBO is used. TLBO is a population-based searching technique that defines two constraint functions for the selection and optimization process. The proposed face detection algorithm is thus based on a feature selection and feature optimization process: the face image database is passed through the partial feature extractor function, and this transform function yields the texture features of the face images. For performance evaluation, our proposed algorithm was implemented in MATLAB 7.8.0 software with face images provided by the Google face image database, and hit and miss ratios were used for numerical analysis of the results. Our empirical evaluation shows better prediction results in comparison with the PIFR method of face detection.

  3. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach.

    Science.gov (United States)

    Irshad, Humayun; Jalali, Sepehr; Roux, Ludovic; Racoceanu, Daniel; Hwee, Lim Joo; Naour, Gilles Le; Capron, Frédérique

    2013-01-01

    According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. The aim is to investigate the various texture features and Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and classification rate.

  4. Shape and texture based novel features for automated juxtapleural nodule detection in lung CTs.

    Science.gov (United States)

    Taşcı, Erdal; Uğur, Aybars

    2015-05-01

    Lung cancer is one of the types of cancer with the highest mortality rate in the world. In the case of early detection and diagnosis, the survival rate of patients significantly increases. In this study, a novel method and system that provide automatic detection of the juxtapleural nodule pattern have been developed from cross-sectional images of lung CT (Computerized Tomography). Seven new features, shape-based as well as combined shape and texture based, are contributed to the literature for lung nodules. The system we developed consists of six main stages: preprocessing, lung segmentation, detection of nodule candidate regions, feature extraction, feature selection (with five feature ranking criteria) and classification. The LIDC dataset containing cross-sectional lung CT images has been utilized; 1410 nodule candidate regions and 40 features have been extracted from 138 cross-sectional images of 24 patients. Experimental results for 10 classifiers are obtained and presented. Adding our derived features to the 33 known features increased nodule recognition performance from 0.9639 to 0.9679 AUC value with generalized linear model regression (GLMR) for 22 selected features, reaching one of the most successful results in the literature.

  5. Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2008-01-01

    Full Text Available The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for analysis of medical images based on hybridization of syntactic and statistical approaches, using artificial neural network (ANN). This algorithm performs segmentation and classification as is done in human vision system, which recognizes objects; perceives depth; identifies different textures, curved surfaces, or a surface inclination by texture information and brightness. The analysis of medical image is directly based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of extracted features by pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with ANN as segmentation and classifier tool. The present approach directly combines second, third, and fourth steps into one algorithm. This is a semisupervised approach in which supervision is involved only at the level of defining texture-primitive cell; afterwards, algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on the test images impregnated with distorted Markov texture cell. In addition to this, the output also indicated the level of distortion in distorted Markov texture cell as compared to standard Markov texture cell. Finally, algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those with manual segmentation and were clinically correlated.

  6. Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network.

    Science.gov (United States)

    Sharma, Neeraj; Ray, Amit K; Sharma, Shiru; Shukla, K K; Pradhan, Satyajit; Aggarwal, Lalit M

    2008-07-01

    The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for analysis of medical images based on hybridization of syntactic and statistical approaches, using artificial neural network (ANN). This algorithm performs segmentation and classification as is done in human vision system, which recognizes objects; perceives depth; identifies different textures, curved surfaces, or a surface inclination by texture information and brightness. The analysis of medical image is directly based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of extracted features by pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with ANN as segmentation and classifier tool. The present approach directly combines second, third, and fourth steps into one algorithm. This is a semisupervised approach in which supervision is involved only at the level of defining texture-primitive cell; afterwards, algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on the test images impregnated with distorted Markov texture cell. In addition to this, the output also indicated the level of distortion in distorted Markov texture cell as compared to standard Markov texture cell. Finally, algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those with manual segmentation and were clinically correlated.

  7. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Smoke detection is a key part of fire recognition in forest fire surveillance video, since the smoke produced by forest fires is visible well before the flames. The performance of smoke video detection algorithms is often influenced by smoke-like objects such as heavy fog. This paper presents a novel forest fire smoke video detection method based on spatiotemporal features and dynamic texture features. At first, Kalman filtering is used to segment candidate smoke regions. Then, each candidate smoke region is divided into small blocks. The spatiotemporal energy feature of each block is extracted by computing the energy features of its 8 neighboring blocks in the current frame and its two adjacent frames. The flutter direction angle is computed by analyzing the centroid motion of the segmented regions in one candidate smoke video clip. The Local Binary Motion Pattern (LBMP) is used to define dynamic texture features of smoke videos. Finally, smoke video is recognized by the Adaboost algorithm. The experimental results show that the proposed method can effectively detect smoke in video recorded from different scenes.

  8. SEGMENTATION OF POLARIMETRIC SAR IMAGES USING WAVELET TRANSFORMATION AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    A. Rezaeian

    2015-12-01

    Full Text Available Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes the segmentation of these data difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation, combining gray level information with texture information. First, we produce coherency or covariance matrices and then generate a span image from them. The next step of the proposed method is texture feature extraction from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means clustering. We have applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
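
    A condensed sketch of the wavelet-texture segmentation idea above: per-pixel features combine the span/gray value with the local energy of the first-level DWT detail sub-bands, and k-means stands in for the FCM/k-means clustering step. Speckle filtering, the coherency-matrix processing and the exact feature design of the paper are not reproduced; PyWavelets, scikit-image and scikit-learn are assumed.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter
from skimage.transform import resize
from sklearn.cluster import KMeans

def wavelet_texture_segmentation(span, n_classes=3, wavelet="db2", win=9):
    """Cluster pixels on gray level plus local energies of the DWT detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(span.astype(np.float32), wavelet)
    feats = [span.astype(np.float32)]
    for band in (cH, cV, cD):
        energy = uniform_filter(band ** 2, size=max(win // 2, 1))  # local sub-band energy
        feats.append(resize(energy, span.shape, order=1, anti_aliasing=False))
    X = np.stack(feats, axis=-1).reshape(-1, len(feats))
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(span.shape)
```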

  9. Machine vision: an incremental learning system based on features derived using fast Gabor transforms for the identification of textural objects

    Science.gov (United States)

    Clark, Richard M.; Adjei, Osei; Johal, Harpal

    2001-11-01

    This paper proposes a fast, effective and also very adaptable incremental learning system for identifying textures based on features extracted from Gabor space. The Gabor transform is a useful technique for feature extraction since it exhibits properties that are similar to biological visual sensory systems such as those found in the mammalian visual cortex. Although two-dimensional Gabor filters have been applied successfully to a variety of tasks such as text segmentation, object detection and fingerprint analysis, the work of this paper extends previous work by incorporating incremental learning to facilitate easier training. The proposed system transforms textural images into Gabor space and a non-linear threshold function is then applied to extract feature vectors that bear signatures of the textural images. The mean and variance of each training group are computed, followed by a technique that uses the Kohonen network to cluster these features. The centers of these clusters form the basis of an incremental learning paradigm that allows new information to be integrated into the existing knowledge. A number of experiments are conducted for real-time identification or discrimination of textural images.

  10. Comparing Shape and Texture Features for Pattern Recognition in Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Newsam, S; Kamath, C

    2004-12-10

    Shape and texture features have been used for some time for pattern recognition in datasets such as remote sensed imagery, medical imagery, photographs, etc. In this paper, we investigate shape and texture features for pattern recognition in simulation data. In particular, we explore which features are suitable for characterizing regions of interest in images resulting from fluid mixing simulations. Three texture features--gray level co-occurrence matrices, wavelets, and Gabor filters--and two shape features--geometric moments and the angular radial transform--are compared. The features are evaluated using a similarity retrieval framework. Our preliminary results indicate that Gabor filters perform the best among the texture features and the angular radial transform performs the best among the shape features. The feature which performs the best overall is dependent on how the groundtruth dataset is created.

  11. A New Method of Semantic Feature Extraction for Medical Images Data

    Institute of Scientific and Technical Information of China (English)

    XIE Conghua; SONG Yuqing; CHANG Jinyi

    2006-01-01

    In order to overcome the disadvantages of color-, shape- and texture-based feature definitions for medical images, this paper defines a new kind of semantic feature and its extraction algorithm. We first use a kernel density estimation statistical model to describe the complicated medical image data; secondly, we define some typical representative pixels of the images as features; and finally, we take a hill-climbing strategy from Artificial Intelligence to extract those semantic features. Results from a content-based medical image retrieval system show that our semantic features have better distinguishing ability than color-, shape- and texture-based features and can noticeably improve the recall and precision ratios of the system.

  12. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    DEFF Research Database (Denmark)

    Galavis, P.E.; Hollensen, Christian; Jallow, N.

    2010-01-01

    reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results. The fifty textural features were classified based on their range of variation into three categories: small, intermediate and large variability. Features with small variability (range 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small

  13. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model which is widely used for extracting human facial features and for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on the face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with the edge intensity. Experimental results show that the improved algorithm increases the accuracy of the AAM fitting and expresses more information about edge and texture structure.

  14. An Image Retrieval Method Based on Color and Texture Features

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The technique of image retrieval is widely used in scientific experiments, military affairs, public security, advertisement, family entertainment, libraries and so on. Existing algorithms are mostly based on the characteristics of color, texture, shape and spatial relationships. This paper introduces an image retrieval algorithm based on the matching of a weighted EMD (Earth Mover's Distance) and a texture distance. The EMD is the distance between the histograms of two images in HSV (Hue, Saturation, Value) color space, and the texture distance is the L1 distance between the texture spectra of two images. The experimental results show that the retrieval rate can be increased noticeably by using the proposed algorithm.

  15. Benign-malignant mass classification in mammogram using edge weighted local texture features

    Science.gov (United States)

    Rabidas, Rinku; Midya, Abhishek; Sadhu, Anup; Chakraborty, Jayasree

    2016-03-01

    This paper introduces the novel Discriminative Robust Local Binary Pattern (DRLBP) and Discriminative Robust Local Ternary Pattern (DRLTP) for the classification of mammographic masses as benign or malignant. Masses are a common, yet challenging, sign of breast cancer in mammography, and their diagnosis is a difficult task. Since DRLBP and DRLTP overcome the drawbacks of the Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) by discriminating a brighter object against a dark background and vice versa, in addition to preserving the edge information along with the texture information, several edge-preserving texture features are extracted in this study from DRLBP and DRLTP. Finally, a Fisher Linear Discriminant Analysis method is applied to the discriminating features, selected by a stepwise logistic regression method, for the classification of benign and malignant masses. The performance characteristics of the DRLBP and DRLTP features are evaluated using a ten-fold cross-validation technique with 58 masses from the mini-MIAS database, and the best result is observed with DRLBP, having an area under the receiver operating characteristic curve of 0.982.
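
    For readers unfamiliar with the underlying LBP operator that DRLBP and DRLTP build on, the following sketch computes a plain uniform-LBP histogram with scikit-image; it illustrates the baseline descriptor only, not the discriminative robust variants proposed in the paper, and the test ROI is a placeholder.

```python
# Sketch: baseline uniform LBP histogram (the descriptor that DRLBP/DRLTP extend).
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

def lbp_histogram(image, points=8, radius=1):
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                    # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

roi = data.camera()[100:200, 100:200]      # stand-in for a mammographic mass ROI
print(lbp_histogram(roi))
```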

  16. Exploiting quality and texture features to estimate age and gender from fingerprints

    Science.gov (United States)

    Marasco, Emanuela; Lugini, Luca; Cukic, Bojan

    2014-05-01

    Age and gender of an individual, when available, can contribute to the identification decisions provided by primary biometrics and help improve matching performance. In this paper, we propose a system which automatically infers age and gender from the fingerprint image. Current approaches for predicting age and gender generally exploit features such as ridge count and white line count, which are manually extracted. Existing automated approaches have significant limitations in accuracy, especially when dealing with data pertaining to elderly females. The model proposed in this paper exploits image quality features synthesized from 40 different frequency bands, and image texture properties captured using the Local Binary Pattern (LBP) and Local Phase Quantization (LPQ) operators. We evaluate the performance of the proposed approach using fingerprint images collected from 500 users with an optical sensor. The approach achieves a prediction accuracy of 89.1% for age and 88.7% for gender.

  17. Wavelet-SVM classifier based on texture features for land cover classification

    Science.gov (United States)

    Zhang, Ning; Wu, Bingfang; Zhu, Jianjun; Zhou, Yuemin; Zhu, Liang

    2008-12-01

    Texture features are recognized as a special cue in images, representing the spatial relations of the gray-level pixels. Nowadays, applications of texture analysis in image classification are widespread. Combined with wavelet multi-resolution analysis or the statistical learning theory of support vector machines, texture analysis can further improve classification quality. In this paper, we focus on land cover mapping for the Three Gorges reservoir using SPOT-5 remote sensing data, and a new classification method, a wavelet-SVM classifier based on texture features, is employed for this study. Compared to the traditional maximum likelihood classifier and an SVM classifier that uses only spectral features, this method produces more accurate classification results. According to the real environment of the Three Gorges reservoir land cover, a best texture group is selected from several texture features. The image is decomposed at different levels, which is one of the main advantages of wavelets, and the texture features are then computed in every sub-image; the next step is eliminating redundancy, in which the texture features are concentrated onto the first principal components using principal component analysis. Finally, with the first principal components as input, a classification result is obtained using SVM at every decomposition scale. A remaining problem that cannot be overlooked is how to select the best SVM parameters, so an iterative rule based on classification accuracy is introduced: the higher the accuracy, the more appropriate the parameters.
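
    A minimal sketch of the kind of pipeline described above, assuming PyWavelets, scikit-image and scikit-learn are available; the wavelet choice, GLCM settings and SVM parameters are illustrative, and X/y stand in for a labelled set of image patches rather than the SPOT-5 data used in the paper.

```python
# Sketch: wavelet sub-band GLCM textures -> PCA -> SVM (illustrative parameters only).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def subband_glcm_features(patch, wavelet="db2", level=2):
    feats = []
    coeffs = pywt.wavedec2(patch.astype(float), wavelet, level=level)
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    for band in subbands:
        # Rescale each sub-band to 8 gray levels before building the GLCM.
        q = np.digitize(band, np.linspace(band.min(), band.max() + 1e-9, 9)) - 1
        glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                            angles=[0, np.pi / 2], levels=8,
                            symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)

# X: grayscale patches, y: land-cover labels (placeholders for a real training set).
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(40, 64, 64))
y = rng.integers(0, 3, size=40)
features = np.array([subband_glcm_features(p) for p in X])
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(C=10, gamma="scale"))
clf.fit(features, y)
```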

  18. Importance of the texture features in a query from a spectral image database

    Science.gov (United States)

    Kohonen, Oili; Hauta-Kasari, Markku

    2006-01-01

    A new, semantically meaningful technique for querying images from a spectral image database is proposed. The technique is based on the use of both color and texture features. The color features are calculated from the spectral images by using the Self-Organizing Map (SOM), while the Gray Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) methods are used for constructing the texture features. The importance of texture features in querying is seen in the experimental results, which are obtained using a real spectral image database. The differences between the results gained by the use of the co-occurrence matrix and of LBP are also discussed.

  19. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
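
    The sketch below shows one plausible reading of "watershed segmentation applied to the Canny gradient" using scikit-image; the smoothing sigma, marker selection and test image are assumptions for illustration, not details taken from the patent.

```python
# Sketch: watershed segmentation seeded from regions far from Canny edges,
# using the gradient magnitude as the relief surface.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, feature, filters, segmentation

image = data.camera() / 255.0                       # stand-in for a planetary image
edges = feature.canny(image, sigma=2.0)             # Canny edge map (boolean)
gradient = filters.sobel(image)                     # gradient magnitude as relief

# Markers: connected regions far from any detected edge.
distance = ndi.distance_transform_edt(~edges)
markers, _ = ndi.label(distance > 5)

labels = segmentation.watershed(gradient, markers, mask=~edges)
print(labels.max(), "candidate regions")
```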

  20. Color and Texture Feature for Remote Sensing - Image Retrieval System: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Retno Kusumaningrum

    2011-09-01

    Full Text Available In this study, we propose a score fusion technique to improve the performance of a remote sensing image retrieval system (RS-IRS) using a combination of several features. The representation of each feature is selected based on its performance when used as a single feature in the RS-IRS. Those features are the color moment in the L*a*b* color space, the edge direction histogram extracted from the Saturation channel, GLCM and Gabor wavelet features represented by their standard deviations, and the local binary pattern with an 8-neighborhood. Score fusion is performed by computing the similarity between an image in the database and the query as the sum of all feature similarities, where each feature similarity is divided by the SVD value of the feature similarity between all database images and the query for the related feature. Feature similarity is measured by histogram intersection for the local binary pattern, whereas the color moment, edge direction histogram, GLCM, and Gabor features are compared with the Euclidean distance. The final results show that the best retrieval performance in this study is obtained by the system that combines color and texture features (i.e. color moment, edge direction histogram, GLCM, Gabor wavelet, and local binary pattern) and uses score fusion to measure the similarity between the query and the database images. This system outperforms the five individual features with average precision gains of 3%, 20%, 13%, 11%, and 9%, respectively, for the color moment, edge direction histogram, GLCM, Gabor wavelet, and LBP. Moreover, it also improves by 17% over the system without score fusion (the simple-sum technique).
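
    A rough sketch of how such a score fusion might be assembled is shown below; the normalization by the largest singular value of each feature's query-to-database distance vector is one possible reading of the "SVD value" mentioned in the abstract, and the distance functions, feature dimensions and stand-in database are assumptions.

```python
# Sketch: fusing per-feature similarity scores, each normalized by the largest
# singular value of that feature's query-to-database distance vector (one possible
# reading of the "SVD value" normalization described in the abstract).
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def histogram_intersection_distance(a, b):
    return 1.0 - np.minimum(a, b).sum()

def fused_scores(db_feats, query_feats, metrics):
    """db_feats: dict name -> (N, d) array; query_feats: dict name -> (d,) vector."""
    n_images = next(iter(db_feats.values())).shape[0]
    total = np.zeros(n_images)
    for name, metric in metrics.items():
        dists = np.array([metric(v, query_feats[name]) for v in db_feats[name]])
        # The largest singular value of a 1 x N matrix equals its Euclidean norm.
        scale = np.linalg.svd(dists[None, :], compute_uv=False)[0] + 1e-12
        total += dists / scale
    return total                            # smaller score = more similar image

rng = np.random.default_rng(1)
db = {"color_moment": rng.random((20, 9)),
      "lbp": rng.random((20, 10)) / 10.0}   # roughly normalized histograms
q = {"color_moment": rng.random(9), "lbp": rng.random(10) / 10.0}
scores = fused_scores(db, q, {"color_moment": euclidean,
                              "lbp": histogram_intersection_distance})
print(np.argsort(scores)[:5])               # indices of the five best matches
```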

  1. Breast tissue classification in digital tomosynthesis images based on global gradient minimization and texture features

    Science.gov (United States)

    Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei

    2014-03-01

    Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated on five patient DBT images, with manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classification results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.

  2. A Novel Feature Extraction Scheme for Medical X-Ray Images

    OpenAIRE

    Prachi.G.Bhende; Dr.A.N.Cheeran

    2016-01-01

    X-ray images are gray-scale images with almost the same textural characteristics. Conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods such as GLCM, LBP and HOG for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database that can be used to perform reliable matching between different views of an obje...

  3. Rapid Feature Extraction for Optical Character Recognition

    CERN Document Server

    Hossain, M Zahid; Yan, Hong

    2012-01-01

    Feature extraction is one of the fundamental problems of character recognition. The performance of a character recognition system depends on proper feature extraction and correct classifier selection. In this article, a rapid feature extraction method named Celled Projection (CP) is proposed, which computes the projections of each section formed by partitioning an image into cells. The recognition performance of the proposed method is compared with other widely used feature extraction methods that have been intensively studied for many different scripts in the literature. The experiments were conducted using Bangla handwritten numerals along with three different well-known classifiers, and demonstrate comparable results, including 94.12% recognition accuracy using celled projection.
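
    A minimal numpy sketch of what a celled-projection style feature could look like: the image is partitioned into a grid of cells and the row and column projections (pixel sums) of each cell are concatenated. The 4x4 grid and binarization threshold are assumptions, not the parameters used in the article.

```python
# Sketch: celled-projection features - per-cell row/column projections, concatenated.
import numpy as np

def celled_projection(image, grid=(4, 4), threshold=128):
    binary = (image >= threshold).astype(float)
    feats = []
    for row_block in np.array_split(binary, grid[0], axis=0):
        for cell in np.array_split(row_block, grid[1], axis=1):
            feats.extend(cell.sum(axis=1))   # horizontal projection of the cell
            feats.extend(cell.sum(axis=0))   # vertical projection of the cell
    return np.array(feats)

digit = np.random.default_rng(0).integers(0, 256, size=(32, 32))  # stand-in numeral
print(celled_projection(digit).shape)
```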

  4. ANTHOCYANINS ALIPHATIC ALCOHOLS EXTRACTION FEATURES

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Full Text Available Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as a dye, the food additive E163. To extract them from natural vegetable raw materials, ethanol or acidified water is traditionally used, but in some technologies this is unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, the extraction of the pigments with alcohols of different carbon-skeleton structures and different positions and numbers of hydroxyl groups was explored. To isolate the anthocyanins, the raw materials were extracted sequentially twice at t = 60 °C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid colorimetry. The color of black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group; alcohols of normal structure give higher optical density and a higher red color component than alcohols of isomeric structure. This is due to their different ability to form hydrogen bonds when extracting anthocyanins, and to other intermolecular interactions. During storage, blackcurrant extracts undergo significant structural changes of the recovered pigments, which leads to a significant change in color; this variation is stronger the greater the length of the carbon skeleton and the branching of the extractant molecules. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols. However, these extracts are preserved significantly better because of their reducing ability when interacting with polyphenolic compounds.

  5. A cosmic microwave background feature consistent with a cosmic texture.

    Science.gov (United States)

    Cruz, M; Turok, N; Vielva, P; Martínez-González, E; Hobson, M

    2007-12-07

    The Cosmic Microwave Background provides our most ancient image of the universe and our best tool for studying its early evolution. Theories of high-energy physics predict the formation of various types of topological defects in the very early universe, including cosmic texture, which would generate hot and cold spots in the Cosmic Microwave Background. We show through a Bayesian statistical analysis that the most prominent 5°-radius cold spot observed in all-sky images, which is otherwise hard to explain, is compatible with having been caused by a texture. From this model, we constrain the fundamental symmetry-breaking energy scale to be φ0 ≈ 8.7 × 10^15 gigaelectron volts. If confirmed, this detection of a cosmic defect will probe physics at energies exceeding any conceivable terrestrial experiment.

  6. A new procedure for characterizing textured surfaces with a deterministic pattern of valley features

    DEFF Research Database (Denmark)

    Godi, Alessandro; Kühle, A; De Chiffre, Leonardo

    2013-01-01

    In recent years a large number of manufacturing methods have been developed for creating textured surfaces which often present deterministic patterns of valley features. Unfortunately, suitable methodologies for characterizing them are lacking. Existing standards cannot in fact...

  7. A Cosmic Microwave Background feature consistent with a cosmic texture

    OpenAIRE

    Cruz, M.; Turok, N.; Vielva, P.; Martinez-Gonzalez, E.; Hobson, M.

    2007-01-01

    The Cosmic Microwave Background provides our most ancient image of the Universe and our best tool for studying its early evolution. Theories of high energy physics predict the formation of various types of topological defects in the very early universe, including cosmic texture which would generate hot and cold spots in the Cosmic Microwave Background. We show through a Bayesian statistical analysis that the most prominent, 5 degree radius cold spot observed in all-sky images, which is otherw...

  8. STUDY ON THE TECHNIQUE TO DETECT TEXTURE FEATURES IN SAR IMAGES

    Institute of Scientific and Technical Information of China (English)

    Fu Yusheng; Ding Dongtao; Hou Yinming

    2004-01-01

    This letter studies the detection of texture features in Synthetic Aperture Radar (SAR) images. By analyzing the feature detection method proposed by Lopes, an improved texture detection method is proposed, which can detect not only edges but also lines, while avoiding the edge stretching and line suppression of the former algorithm. Experimental results with both simulated and real SAR images verify the advantages and practicability of the improved method.

  9. A NOVEL WRAPPING CURVELET TRANSFORMATION BASED ANGULAR TEXTURE PATTERN (WCTATP EXTRACTION METHOD FOR WEED IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    D. Ashok Kumar

    2016-02-01

    Full Text Available Weed is apparently a major menace in crop production as it competes with the crop for nutrients, moisture, space and light, resulting in poor growth and development of the crop and ultimately in yield loss. Yield losses can exceed 70% when crops are grown under unweeded conditions with severe weed infestation. Weed management is the most significant process in agricultural applications to improve the crop productivity rate and reduce the herbicide application cost. Existing weed detection techniques do not yield good performance due to the complex background, illumination variation, and crop and weed overlapping in agricultural field images. Hence, there arises a need for the development of an effective weed identification technique. To overcome this drawback, this paper proposes a novel Wrapping Curvelet Transformation Based Angular Texture Pattern Extraction Method (WCTATP) for weed identification. In the proposed work, Global Histogram Equalization (GHE) is used to improve the quality of the image and an Adaptive Median Filter (AMF) is used for filtering the impulse noise from the image. Plant image identification is performed using green pixel extraction and k-means clustering. The wrapping curvelet transform is applied to the plant image, and feature extraction is performed to extract its angular texture pattern. A Particle Swarm Optimization (PSO) based Differential Evolution Feature Selection (DEFS) approach is applied to select the optimal features. Then, the selected features are learned and passed through an RVM-based classifier to detect the weed. Edge detection and contouring are performed to identify the weed in the plant image. A fuzzy rule-based approach is applied to detect low, medium and high levels of weed patchiness. From the experimental results, it is clearly observed that the accuracy of the proposed approach is higher than that of the existing Support Vector Machine (SVM) based approaches. The proposed approach

  10. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images such as eyes, nose and mouth are located using the Canny edge operator, and face recognition is performed. Based on the texture and shape information, gender and age classification is done using the posteriori class probability and an Artificial Neural Network, respectively. It is observed that the face recognition accuracy is 100%, and the gender and age classification accuracies are around 98% and 94%, respectively.

  11. Prognosis classification in glioblastoma multiforme using multimodal MRI derived heterogeneity textural features: impact of pre-processing choices

    Science.gov (United States)

    Upadhaya, Taman; Morvan, Yannick; Stindel, Eric; Le Reste, Pierre-Jean; Hatt, Mathieu

    2016-03-01

    Heterogeneity image-derived features of Glioblastoma multiforme (GBM) tumors from multimodal MRI sequences may provide higher prognostic value than standard parameters used in routine clinical practice. We previously developed a framework for automatic extraction and combination of image-derived features (also called "Radiomics") through support vector machines (SVM) for predictive model building. The results we obtained in a cohort of 40 GBM patients suggested these features could be used to identify patients with poorer outcome. However, extraction of these features is a delicate multi-step process and their values may therefore depend on the pre-processing of images. The original developed workflow included skull removal, bias homogeneity correction, and multimodal tumor segmentation, followed by textural features computation, and lastly ranking, selection and combination through a SVM-based classifier. The goal of the present work was to specifically investigate the potential benefit and respective impact of the addition of several MRI pre-processing steps (spatial resampling for isotropic voxels, intensity quantization and normalization) before textural features computation, on the resulting accuracy of the classifier. Eighteen patient datasets were also added for the present work (58 patients in total). A classification accuracy of 83% (sensitivity 79%, specificity 85%) was obtained using the original framework. The addition of the new pre-processing steps increased it to 93% (sensitivity 93%, specificity 93%) in identifying patients with poorer survival (below the median of 12 months). Among the three considered pre-processing steps, spatial resampling was found to have the most important impact. This shows the crucial importance of investigating appropriate image pre-processing steps to be used for methodologies based on textural features extraction in medical imaging.
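
    As an illustration of the kind of pre-processing discussed above, the sketch below resamples a volume to isotropic voxels and normalizes and quantizes intensities to a fixed number of gray levels before texture computation; the voxel spacing, number of levels and use of scipy's zoom are assumptions for illustration, not the exact steps of the authors' workflow.

```python
# Sketch: isotropic resampling + intensity normalization/quantization prior to
# texture feature extraction (illustrative parameters, not the paper's settings).
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, new_spacing=1.0):
    factors = [s / new_spacing for s in spacing]
    return zoom(volume, factors, order=1)             # trilinear interpolation

def quantize(volume, n_levels=64):
    v = volume.astype(float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)   # normalize to [0, 1]
    return np.clip((v * n_levels).astype(int), 0, n_levels - 1)

mri = np.random.default_rng(0).normal(size=(40, 64, 64))   # stand-in MRI volume
iso = resample_isotropic(mri, spacing=(3.0, 1.0, 1.0))      # e.g. 3 mm slice spacing
print(iso.shape, quantize(iso).max())
```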

  12. Analysis of mammogram images based on texture features of curvelet sub-bands

    Science.gov (United States)

    Gardezi, Syed Jamal Safdar; Faye, Ibrahima; Eltoukhy, Mohamed Meselhy

    2014-01-01

    Image texture analysis plays an important role in object detection and recognition in image processing. Texture analysis can be used for early detection of breast cancer by classifying mammogram images into normal and abnormal classes. This study investigates breast cancer detection using texture features obtained from the grey level co-occurrence matrices (GLCM) of curvelet sub-band levels combined with texture features obtained from the image itself. The GLCMs were constructed for each sub-band of three curvelet decomposition levels. The obtained feature vector is presented to the classifier to differentiate between normal and abnormal tissues. The proposed method is applied over 305 regions of interest (ROIs) cropped from the MIAS dataset. The simple logistic classifier achieved an 86.66% classification accuracy rate with a sensitivity of 76.53% and a specificity of 91.3%.

  13. A Fast Image Retrieval Algorithm with Multi-Channel Textural Features in PACS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Dong; YANG Yan; QIN Qian-qing

    2005-01-01

    The paper presents a fast algorithm for image retrieval using multi-channel textural features in a medical picture archiving and communication system (PACS). By choosing different linear or nonlinear operators in the prediction and update lifting steps, a linear or nonlinear M-band wavelet decomposition can be achieved in M-band lifting. It provides advantages such as fast transform, in-place calculation and integer-to-integer transform. The set of wavelet moments forms a multi-channel textural feature vector related to the texture distribution of each wavelet image. The experimental results on a CT image database show that the retrieval approach using multi-channel textural features is effective for image indexing and has lower computational complexity and less memory usage. It is much easier to implement in hardware and suitable for applications in real-time medical processing systems.

  14. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting the features of a fingerprint is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to effectively extract curve features that describe the fingerprint. This article proposes a novel algorithm: it combines information from a few nearby fingerprint ridges to extract a new characteristic which can describe the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics extracted by it clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  15. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-Ming; et al.

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting the features of a fingerprint is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to effectively extract curve features that describe the fingerprint. This article proposes a novel algorithm: it combines information from a few nearby fingerprint ridges to extract a new characteristic which can describe the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics extracted by it clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  16. Extraction and assessment of chatter feature

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents feature wavelet packets (FWPs), a new method of chatter feature extraction in the milling process based on the wavelet packet transform (WPT) and using vibration signals. It studies the procedure of automatic feature selection for a given process. An exponential autoregressive (EAR) model is established to extract the limit cycle behavior of chatter, since chatter is a nonlinear oscillation with a limit cycle. A way to determine the number of FWPs is given, along with experimental data to assess the effectiveness of the WPT feature extraction via the unforced response of the EAR model of the reconstructed signal.

  17. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result of the psychological experiments, it can be suggested that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. The extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.

  18. The Prognostic Value of Adaptive Nuclear Texture Features from Patient Gray Level Entropy Matrices in Early Stage Ovarian Cancer

    Directory of Open Access Journals (Sweden)

    Birgitte Nielsen

    2012-01-01

    Full Text Available Background: Nuclear texture analysis gives information about the spatial arrangement of the pixel gray levels in a digitized microscopic nuclear image, providing texture features that may be used as quantitative tools for prognosis of human cancer. The aim of the study was to evaluate the prognostic value of adaptive nuclear texture features in early stage ovarian cancer.

  19. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    Energy Technology Data Exchange (ETDEWEB)

    Galavis, Paulina E.; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert (Dept. of Medical Physics, Univ. of Wisconsin, Madison, WI (United States)), E-mail: galavis@wisc.edu; Hollensen, Christian (Dept. of Informatics and Mathematical Models, Technical Univ. of Denmark, Copenhagen (Denmark))

    2010-10-15

    Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of the features was calculated with respect to the average value. Results. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low level of variations are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be

  20. Diffusion-weighted imaging of the abdomen: Impact of b-values on texture analysis features.

    Science.gov (United States)

    Becker, Anton S; Wagner, Matthias W; Wurnig, Moritz C; Boss, Andreas

    2017-01-01

    The purpose of this work was to systematically assess the impact of the b-value on texture analysis in MR diffusion-weighted imaging (DWI) of the abdomen. In eight healthy male volunteers, echo-planar DWI sequences at 16 b-values ranging between 0 and 1000 s/mm² were acquired at 3 T. Three different apparent diffusion coefficient (ADC) maps were computed (from b = 0, 750 s/mm²; from b = 100, 390, 750 s/mm²; and from all b-values). Texture analysis of rectangular regions of interest in the liver, kidney, spleen, pancreas, paraspinal muscle and subcutaneous fat was performed on the DW images and the ADC maps, applying 19 features computed from the histogram, grey-level co-occurrence matrix (GLCM) and grey-level run-length matrix (GLRLM). Correlations between b-values and texture features were tested with a linear and an exponential model; the best fit was determined by the smallest sum of squared residuals. Differences between the ADC maps were assessed with an analysis of variance. A Bonferroni-corrected p-value less than 0.008 (=0.05/6) was considered statistically significant. Most GLCM- and GLRLM-derived texture features (12-18 per organ) showed significant correlations with the b-value. Four texture features correlated significantly with changing b-values in all organs (p < 0.008). ... GLCM features showed significant variability in the different ADC maps. Several texture features vary systematically in healthy tissues at different b-values, which needs to be taken into account if DWI data with different b-values are analyzed. Histogram- and GLRLM-derived texture features are stable on ADC maps computed from different b-values.
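
    For context, the sketch below shows the standard mono-exponential ADC estimate from a set of diffusion-weighted signals via a log-linear least-squares fit; the synthetic signal values and b-value subset are placeholders for illustration, not the study's data.

```python
# Sketch: mono-exponential ADC estimation, S(b) = S0 * exp(-b * ADC),
# fitted per voxel by linear regression of log(S) against b.
import numpy as np

def adc_map(signals, b_values):
    """signals: (n_b, H, W) DWI stack; b_values: length n_b, in s/mm^2."""
    b = np.asarray(b_values, dtype=float)
    logs = np.log(np.clip(signals, 1e-6, None)).reshape(len(b), -1)
    # Least-squares slope of log(S) against b gives -ADC for each voxel.
    slope = np.polyfit(b, logs, deg=1)[0]
    return (-slope).reshape(signals.shape[1:])

rng = np.random.default_rng(0)
true_adc = 1.0e-3                                          # mm^2/s, typical tissue value
b_vals = [0, 100, 390, 750]
stack = np.stack([1000 * np.exp(-b * true_adc) + rng.normal(0, 2, (8, 8))
                  for b in b_vals])
print(adc_map(stack, b_vals).mean())                        # close to 1e-3 mm^2/s
```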

  1. SU-F-R-45: The Prognostic Value of Radiotherapy Based On the Changes of Texture Features Between Pre-Treatment and Post-Treatment FDG PET Image for NSCLC Patients

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C; Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China)

    2016-06-15

    Purpose: The purpose of this research is to investigate which texture features extracted from FDG-PET images by the gray-level co-occurrence matrix (GLCM) have a higher prognostic value than the others. Methods: 21 non-small cell lung cancer (NSCLC) patients were included in the study. Patients underwent 18F-FDG PET/CT scans both pre-treatment and post-treatment. Firstly, the tumors were extracted by our in-house developed software. Secondly, the clinical features, including the maximum SUV and tumor volume, were extracted with the MIM Vista software, and texture features including angular second moment, contrast, inverse difference moment, entropy and correlation were extracted using MATLAB. The differences were calculated by subtracting the pre-treatment features from the post-treatment features. Finally, SPSS was used to obtain the Pearson correlation coefficients and Spearman rank correlation coefficients between the change ratios of the texture features and the change ratios of the clinical features. Results: The Pearson and Spearman rank correlation coefficients between contrast and maximum SUV are 0.785 and 0.709. The Pearson and Spearman values between inverse difference moment and tumor volume are 0.953 and 0.942. Conclusion: This preliminary study showed that the relationships between different texture features and the same clinical feature are different, and that the prognostic value of contrast and inverse difference moment was found to be higher than that of the other three GLCM-derived textures.
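
    The five GLCM statistics named above have standard definitions; the sketch below computes them from a normalized co-occurrence matrix built with scikit-image. The offset, quantization and test image are illustrative assumptions, not the settings used in this study.

```python
# Sketch: angular second moment, contrast, inverse difference moment, entropy and
# correlation computed from a normalized GLCM (illustrative settings only).
import numpy as np
from skimage.feature import graycomatrix

def glcm_stats(image, levels=16):
    q = (image.astype(float) / image.max() * (levels - 1)).astype(np.uint8)
    p = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                     symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "ASM": (p ** 2).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "IDM": (p / (1.0 + (i - j) ** 2)).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12),
    }

tumor_roi = np.random.default_rng(0).integers(1, 255, size=(32, 32))  # stand-in ROI
print(glcm_stats(tumor_roi))
```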

  2. SU-F-R-36: Validating Quantitative Radiomic Texture Features for Oncologic PET: A Digital Phantom Study

    Energy Technology Data Exchange (ETDEWEB)

    Yang, F; Yang, Y [University of Miami Miller School of Medicine, Miami, FL (United States); Young, L [University of Washington Medical Center, Seattle, WA (United States)

    2016-06-15

    Purpose: Radiomic texture features derived from oncologic PET have recently been brought under intense investigation within the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through the use of realistic simulations in a ground-truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package, which employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly-shaped lesions featuring heterogeneous activity distribution were simulated. For each simulated lesion, 28 texture features in relation to the intensity histograms (GLIH), grey-level co-occurrence matrices (GLCOM), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated and compared with their respective values extracted from the ground truth activity map. Results: In reference to the values from the ground truth images, texture parameters appearing in the simulated data varied within a range of 0.73-3026.2% for GLIH-based, 0.02-100.1% for GLCOM-based, 1.11-173.8% for GLNDM-based, and 0.35-66.3% for GLZSM-based features. For the majority of the examined texture metrics (16/28), their values on the simulated data differed significantly from those from the ground truth images (P-values ranging from <0.0001 to 0.04). Features not exhibiting a significant difference comprised GLIH-based standard deviation; GLCO-based energy and entropy; GLND-based coarseness and contrast; and GLZS-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and could be substantial. It is thus

  3. Texture Analysis for Information Extraction and Feature Recognition in Optical Coherence Tomography Images

    Institute of Scientific and Technical Information of China (English)

    梁艳梅; 张舒

    2011-01-01

    With the development of optical coherence tomography (OCT) in the field of bio-medical imaging, computer-aided medical diagnosis and treatment effectiveness evaluation by means of the tissue features reflected in OCT images have attracted much attention. Among the methods aimed at information extraction and feature recognition in OCT images, texture analysis has been studied most thoroughly and has shown good feasibility. In this paper, we concentrate on the characteristics and applications of various texture analysis methods, followed by the existing problems and possible solutions.

  4. Linear classifier and textural analysis of optical scattering images for tumor classification during breast cancer extraction

    Science.gov (United States)

    Eguizabal, Alma; Laughney, Ashley M.; Garcia Allende, Pilar Beatriz; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.

    2013-02-01

    Texture analysis of light scattering in tissue is proposed to obtain diagnostic information from breast cancer specimens. Light scattering measurements are minimally invasive and allow the estimation of tissue morphology to guide the surgeon in resection surgeries. The usability of scatter signatures acquired with a micro-sampling reflectance spectral imaging system was improved by utilizing an empirical approximation to the Mie theory to estimate the scattering power on a per-pixel basis. Co-occurrence analysis is then applied to the scattering power images to extract the textural features. A statistical analysis of the features demonstrated the suitability of the autocorrelation for the classification of non-malignant (normal epithelia and stroma, benign epithelia and stroma, inflammation), malignant (DCIS, IDC, ILC) and adipose tissue, since it reveals morphological information of the tissue. Non-malignant tissue shows higher autocorrelation values while adipose tissue presents a very low autocorrelation in its scatter texture, with malignant tissue occupying the middle ground. Consequently, a fast linear classifier based on just one straightforward feature is enough to provide relevant diagnostic information. A leave-one-out validation of the linear classifier on 29 samples with 48 regions of interest showed classification accuracies of 98.74% on adipose tissue, 82.67% on non-malignant tissue and 72.37% on malignant tissue, in comparison with the biopsy H&E gold standard. This demonstrates that autocorrelation analysis of scatter signatures is a very computationally efficient and automated approach to provide pathological information in real time to guide the surgeon during tissue resection.

  5. Vein Texture Extraction Using the Multiscale Second-Order Differential Model

    Directory of Open Access Journals (Sweden)

    Xiong Xinyan

    2013-07-01

    Full Text Available In order to analyze the back-of-hand vein pattern rapidly and effectively, a novel approach based on a multi-scale second-order differential model is proposed to extract the vein texture directly from vein samples. It is made up of two parts: one is the formulation of a local second-order differential model of vein texture (VLSDM), and the other is texture extraction based on the multi-scale VLSDM. This paper analyzes vein extraction using the multi-scale VLSDM and handles the filter response using multi-scale analysis with noise filtering. The new algorithm achieves good results for vein texture that is fuzzy, unevenly distributed and cross-adhered. Additionally, this method keeps the original form of the local shape and obtains orientation and scale information of the vein texture. The experimental results obtained with this new method have also been compared with another method, showing its outstanding performance.

  6. An explorative childhood pneumonia analysis based on ultrasonic imaging texture features

    Science.gov (United States)

    Zenteno, Omar; Diaz, Kristians; Lavarello, Roberto; Zimic, Mirko; Correa, Malena; Mayta, Holger; Anticona, Cynthia; Pajuelo, Monica; Oberhelman, Richard; Checkley, William; Gilman, Robert H.; Figueroa, Dante; Castañeda, Benjamín.

    2015-12-01

    According to the World Health Organization, pneumonia is the respiratory disease with the highest pediatric mortality rate, accounting for 15% of all deaths of children under 5 years old worldwide. The diagnosis of pneumonia is commonly made by clinical criteria with support from ancillary studies and laboratory findings. Chest imaging is commonly done with chest X-rays and occasionally with a chest CT scan. Lung ultrasound is a promising alternative for chest imaging; however, interpretation is subjective and requires adequate training. In the present work, a two-class classification algorithm based on four gray-level co-occurrence matrix texture features (i.e., contrast, correlation, energy and homogeneity) extracted from lung ultrasound images of children aged between six months and five years is presented. Ultrasound data was collected using a L14-5/38 linear transducer. The data consisted of 22 positive- and 68 negative-diagnosed B-mode cine-loops selected by a medical expert and captured in the facilities of the Instituto Nacional de Salud del Niño (Lima, Peru), for a total of 90 videos obtained from twelve children diagnosed with pneumonia. The classification capacity of each feature was explored independently, and the optimal threshold was selected by receiver operating characteristic (ROC) curve analysis. In addition, a principal component analysis was performed to evaluate the combined performance of all the features. Contrast and correlation were the two most significant features. The classification performance of these two features combined by principal components was evaluated. The results revealed 82% sensitivity, 76% specificity, 78% accuracy and 0.85 area under the ROC curve.
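
    A small sketch of how an optimal threshold can be read off a ROC curve (here via the Youden index) is given below; the synthetic scores stand in for a single texture feature such as GLCM contrast, and the criterion choice is an assumption rather than the one used by the authors.

```python
# Sketch: choosing a decision threshold for a single texture feature from the ROC
# curve using the Youden index (sensitivity + specificity - 1).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic "GLCM contrast" values: positives shifted upward on average.
scores = np.concatenate([rng.normal(1.0, 0.5, 22), rng.normal(0.0, 0.5, 68)])
labels = np.concatenate([np.ones(22), np.zeros(68)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                      # Youden index
print("AUC:", roc_auc_score(labels, scores))
print("threshold:", thresholds[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```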

  7. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  8. ECG Feature Extraction Techniques - A Survey Approach

    CERN Document Server

    Karpagachelvi, S; Sivakumar, M

    2010-01-01

    ECG feature extraction plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. The feature extraction scheme determines the amplitudes and intervals in the ECG signal for subsequent analysis. The amplitude and interval values of the P-QRS-T segment determine the functioning of the heart of every human. Recently, numerous research efforts and techniques have been developed for analyzing the ECG signal. The proposed schemes were mostly based on Fuzzy Logic Methods, Artificial Neural Networks (ANN), Genetic Algorithms (GA), Support Vector Machines (SVM), and other signal analysis techniques. All these techniques and algorithms have their advantages and limitations. This paper discusses various techniques and transformations proposed earlier in the literature for extracting features from an ECG signal. In addition, it also provides a comparative study of the various methods proposed by researchers for extracting features from ECG signals.

  9. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities.

    Science.gov (United States)

    Vallières, M; Freeman, C R; Skamene, S R; El Naqa, I

    2015-07-21

    This study aims at developing a joint FDG-PET and MRI texture-based model for the early evaluation of lung metastasis risk in soft-tissue sarcomas (STSs). We investigate if the creation of new composite textures from the combination of FDG-PET and MR imaging information could better identify aggressive tumours. Towards this goal, a cohort of 51 patients with histologically proven STSs of the extremities was retrospectively evaluated. All patients had pre-treatment FDG-PET and MRI scans comprised of T1-weighted and T2-weighted fat-suppression sequences (T2FS). Nine non-texture features (SUV metrics and shape features) and forty-one texture features were extracted from the tumour region of separate (FDG-PET, T1 and T2FS) and fused (FDG-PET/T1 and FDG-PET/T2FS) scans. Volume fusion of the FDG-PET and MRI scans was implemented using the wavelet transform. The influence of six different extraction parameters on the predictive value of textures was investigated. The incorporation of features into multivariable models was performed using logistic regression. The multivariable modeling strategy involved imbalance-adjusted bootstrap resampling in the following four steps leading to final prediction model construction: (1) feature set reduction; (2) feature selection; (3) prediction performance estimation; and (4) computation of model coefficients. Univariate analysis showed that the isotropic voxel size at which texture features were extracted had the most impact on predictive value. In multivariable analysis, texture features extracted from fused scans significantly outperformed those from separate scans in terms of lung metastases prediction estimates. The best performance was obtained using a combination of four texture features extracted from FDG-PET/T1 and FDG-PET/T2FS scans. This model reached an area under the receiver-operating characteristic curve of 0.984 ± 0.002, a sensitivity of 0.955 ± 0.006, and a specificity of 0.926 ± 0.004 in bootstrapping
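
    The abstract above mentions that PET and MRI volumes were fused with the wavelet transform before computing composite textures; a simplified 2-D illustration of one common wavelet-fusion rule (average the approximation bands, keep the larger-magnitude detail coefficients) is sketched below. The wavelet family, level and fusion rule are generic choices, not necessarily those of the authors.

```python
# Sketch: simple wavelet-domain fusion of two co-registered images
# (average approximation coefficients, take max-magnitude detail coefficients).
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation: average
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))       # details: max magnitude
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
pet, t1 = rng.random((64, 64)), rng.random((64, 64))       # stand-ins for PET and T1
print(wavelet_fuse(pet, t1).shape)
```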

  10. COLOR FEATURE EXTRACTION FOR CBIR

    Directory of Open Access Journals (Sweden)

    Dr. H.B.KEKRE

    2011-12-01

    Full Text Available Content-Based Image Retrieval is the application of computer vision techniques to the image retrieval problem of searching for digital images in large databases. The CBIR method discussed in this paper can filter images based on their content, providing better indexing and returning more accurate results. In this paper we discuss: feature vector generation using a color averaging technique, similarity measures, and performance evaluation using 5 randomly selected query images per class, of which the result of one class is discussed. The Precision-Recall crossover plot is used as the performance evaluation measure to check the algorithm. As the system developed is generic, the database consists of images from different classes. The effect of the size of the database and the number of different classes on the relevancy of the retrievals is examined.
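
    A very small sketch of a color-averaging feature vector and Euclidean ranking, in the spirit of the technique named above; the block grid and the random stand-in database are assumptions for illustration.

```python
# Sketch: color-averaging feature vector (per-block RGB means) and Euclidean ranking.
import numpy as np

def color_average_features(image, grid=(2, 2)):
    """image: (H, W, 3) RGB array -> mean R, G, B of each block, concatenated."""
    feats = []
    for rows in np.array_split(image, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            feats.extend(block.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

rng = np.random.default_rng(0)
database = rng.integers(0, 256, size=(50, 32, 32, 3))      # stand-in image database
query = rng.integers(0, 256, size=(32, 32, 3))

db_vecs = np.array([color_average_features(im) for im in database])
dists = np.linalg.norm(db_vecs - color_average_features(query), axis=1)
print(np.argsort(dists)[:5])                               # top-5 retrieved indices
```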

  11. Multi-Level Feature Descriptor for Robust Texture Classification via Locality-Constrained Collaborative Strategy

    CERN Document Server

    Kong, Shu

    2012-01-01

    This paper introduces a simple but highly efficient ensemble for robust texture classification, which can effectively deal with translation, scale and significant viewpoint changes. The proposed method first inherits the spirit of the spatial pyramid matching model (SPM), which is popular for encoding the spatial distribution of local features, but in a flexible way, partitioning the original image into different levels and incorporating different overlapping patterns at each level. This flexible setup helps capture the informative features and produces sufficient local feature codes through well-chosen aggregation statistics or pooling operations within each partitioned region, even when only a few sample images are available for training. Each texture image is then represented by several orderless feature codes, and all the training data together form a reliable feature pond. Finally, to take full advantage of this feature pond, we develop a collaborative representation-based strategy with locality constr...

  12. Deblurring Texture Extraction from Digital Aerial Image by Reforming "Steep Edge" Curve

    Institute of Scientific and Technical Information of China (English)

    WU Jun; CHEN Danqing

    2005-01-01

    Texture extraction from digital aerial images is widely used in three-dimensional city modeling to generate "photo-realistic" views. In this paper, a method based on reforming the "steep edge" curve, which explains how the diffraction of sunlight blurs digital aerial images, is proposed to deblur the texture extracted from digital aerial images, and the experiment shows good results in terms of visualization and automation.

  13. Oil spill information extraction combined with texture features from HJ-CCD Sensors-A case study in PL19-3 oil spill incident%辅以纹理特征的HJ-CCD海上溢油信息提取——以PL19-3溢油为例

    Institute of Scientific and Technical Information of China (English)

    李颖; 兰国新; 刘丙新

    2012-01-01

    Traditional optical satellite remote sensing information extraction techniques constitute an important component of oil spill monitoring systems, but their monitoring accuracy and capability are limited by their dependence on spectral features alone. Based on CCD data (30 m spatial resolution) from the operational HJ instruments, the Penglai 19-3 oil spill incident is taken as an example to discuss a method that combines spectral with directional textural information to improve the accuracy of the extracted information. A principal-components-based algorithm first extracts the spectral information of oil-on-water; a directional gradient algorithm then acquires the edge distribution of the oil-contaminated area. Finally, the proposed method was tested on 8 scenes of HJ-CCD data and compared with the conventional single-spectrum method using the Jeffries-Matusita separability index. The results show that the introduction of directional texture analysis is effective for the edge detection of the contaminated zone and for the identification of thick versus thin oil distribution (the class separability index increases to 1.9999), which demonstrates that the approach is feasible for oil spill monitoring based on HJ-CCD data.

  14. Medical image retrieval based on texture and shape feature co-occurrence

    Science.gov (United States)

    Zhou, Yixiao; Huang, Yan; Ling, Haibin; Peng, Jingliang

    2012-03-01

    With the rapid development and wide application of medical imaging technology, explosive volumes of medical image data are produced every day all over the world. As such, it becomes increasingly challenging to manage and utilize such data effectively and efficiently. In particular, content-based medical image retrieval has been intensively researched in the past decade or so. In this work, we propose a novel approach to content-based medical image retrieval utilizing the co-occurrence of both the texture and the shape features, in contrast to most previous algorithms that use purely the texture or the shape feature. Specifically, we propose a novel representation for the co-occurrence of the texture and the shape features in an image, i.e., the gray level and edge direction co-occurrence matrix (GLEDCOM). Based on GLEDCOM, we define eleven features forming a feature vector that is used to measure the similarity between images. As a result, it consistently yields outstanding performance on both images rich in texture (e.g., images of the brain) and images with dominant smooth regions and sharp edges (e.g., images of the bladder). As demonstrated by experiments, the mean precision of retrieval with the GLEDCOM algorithm outperforms a set of representative algorithms, including the gray level co-occurrence matrix (GLCM) based, the Hu's seven moment invariants (HSMI) based, the uniformity estimation method (UEM) based and the modified Zernike moments (MZM) based algorithms, by 10%-20%.

  15. Comparative Analysis of Feature Extraction Methods for the Classification of Prostate Cancer from TRUS Medical Images

    Directory of Open Access Journals (Sweden)

    Manavalan Radhakrishnan

    2012-01-01

    Full Text Available Diagnosing prostate cancer is a challenging task for urologists, radiologists, and oncologists. Ultrasound imaging is one of the promising techniques used for early detection of prostate cancer. The region of interest (ROI) is identified by different methods after preprocessing. In this paper, DBSCAN clustering with morphological operators is used to extract the prostate region. The evaluation of texture features is important for several image processing applications. The performance of the features extracted with various texture methods, such as the histogram, Gray-Level Co-occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRLM), is analyzed separately. In this paper, it is proposed to combine the histogram, GLRLM and GLCM features in order to study the performance. A Support Vector Machine (SVM) is adopted to classify the extracted features as benign or malignant. The performance of the texture methods is evaluated using various statistical parameters such as sensitivity, specificity and accuracy. The comparative analysis has been performed over 5500 digitized TRUS images of the prostate.

  16. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.

  17. SAR Images Unsupervised Change Detection Based on Combination of Texture Feature Vector with Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    ZHUANG Huifu

    2016-03-01

    Full Text Available Spatial-contextual information is generally used in change detection because there is significant speckle noise in synthetic aperture radar (SAR) images. In this paper, exploiting the rich texture information of SAR images, an unsupervised change detection approach for high-resolution SAR images based on a texture feature vector and the maximum entropy principle is proposed. The difference image is generated using the 32-dimensional texture feature vector of the gray-level co-occurrence matrix (GLCM), and the threshold is obtained automatically by the maximum entropy principle. In this method, the appropriate window size for change detection is 11×11, according to a regression analysis of window size against the precision index. The experimental results show that the proposed approach can both reduce the influence of speckle noise and improve the detection accuracy of high-resolution SAR images effectively, and that it outperforms the Markov random field approach.
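
    The threshold in this record is selected by the maximum entropy principle. Below is a minimal, hedged sketch of Kapur-style maximum entropy thresholding applied to a synthetic difference-image histogram; the 256-level quantization and the simulated data are assumptions for illustration only.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur-style maximum entropy threshold of a gray-level image.

    The threshold t maximizes the sum of the entropies of the two
    histogram segments [0, t] and (t, bins-1].
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist.astype(np.float64) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t + 1], p[t + 1:]
        w0, w1 = p0.sum(), p1.sum()
        if w0 <= 0 or w1 <= 0:
            continue
        q0, q1 = p0[p0 > 0] / w0, p1[p1 > 0] / w1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Illustrative bimodal difference image: "unchanged" and "changed" pixels
rng = np.random.default_rng(1)
diff = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 500)])
diff = np.clip(diff, 0, 255).astype(np.uint8)
print(max_entropy_threshold(diff))
```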

  18. Linguistic feature analysis for protein interaction extraction

    Directory of Open Access Journals (Sweden)

    Cornelis Chris

    2009-11-01

    Full Text Available Abstract Background The rapid growth of the amount of publicly available reports on biomedical experimental results has recently caused a boost of text mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic, i.e., lexical and syntactic, data extracted from text. However, only a few attempts have been made to evaluate the contribution of the different feature types. In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features, i.e., grammatical relations, shallow syntactic features (part-of-speech information) and lexical features. For this purpose, we use a recently proposed approach that uses support vector machines with structured kernels. Results Our results reveal that the contribution of the different feature types varies for the different data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, classifiers based on deep syntactic information prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared. Conclusion Our findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small relative to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features that are typically used in recent approaches.

  19. Texture feature ranking with relevance learning to classify interstitial lung disease patterns

    NARCIS (Netherlands)

    Huber, Markus B.; Bunte, Kerstin; Nagarajan, Mahesh B.; Biehl, Michael; Ray, Lawrence A.; Wismueller, Axel

    2012-01-01

    Objective: The generalized matrix learning vector quantization (GMLVQ) is used to estimate the relevance of texture features in their ability to classify interstitial lung disease patterns in high-resolution computed tomography images. Methodology: After a stochastic gradient descent, the GMLVQ algo

  20. STATISTICAL PROBABILITY BASED ALGORITHM FOR EXTRACTING FEATURE POINTS IN 2-DIMENSIONAL IMAGE

    Institute of Scientific and Technical Information of China (English)

    Guan Yepeng; Gu Weikang; Ye Xiuqing; Liu Jilin

    2004-01-01

    An algorithm for automatically extracting feature points is developed after the area of feature points in a 2-dimensional (2D) image is located by probability theory, correlation methods and an abnormality criterion. In our approach, feature points in a 2D image can be extracted simply by calculating the standard deviation of gray levels within sampled pixel areas. While extracting feature points, the need to set a threshold by trial and error based on a priori information about the processed image is avoided. The proposed algorithm is shown to be valid and reliable by extracting feature points on actual natural images with abundant and with weak texture, including multiple objects with complex backgrounds. It can meet the demand of extracting feature points of 2D images automatically in machine vision systems.
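
    Since the record extracts feature points from the standard deviation of gray levels in sampled pixel areas, the sketch below computes a local standard-deviation map with a fixed window and marks unusually high values as candidate feature points. The window size and the percentile threshold are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std_map(image, size=7):
    """Per-pixel standard deviation of gray levels in a size x size window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img ** 2, size=size)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    return np.sqrt(var)

# Illustrative image: flat background with one textured patch
rng = np.random.default_rng(2)
img = np.full((64, 64), 100.0)
img[20:40, 20:40] += rng.normal(0, 25, size=(20, 20))

std_map = local_std_map(img, size=7)
# Candidate feature points: pixels whose local gray-level variation is unusually high
candidates = std_map > np.percentile(std_map, 99)
print(int(candidates.sum()), "candidate feature points")
```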

  1. Identification of natural images and computer-generated graphics based on statistical and textural features.

    Science.gov (United States)

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and 31 feature dimensions are acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme can achieve an identification accuracy of 97.89% for computer-generated graphics and an identification accuracy of 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has a great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.

  2. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes the extraction scheme of global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing temporal information of videos. And it is more objective and consistent compared to other features such as color, texture, etc. Efficient motion feature extraction is an important step for content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are always proposed on the basis of known object region in the frames. In this paper, a whole picture of the motion information in the video shot has been achieved through analyzing motion of background and foreground respectively and automatically. 6-parameter affine model is utilized as the motion model of background motion, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. Then the center of object region is calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with MPEG-7 parametric motion and motion trajectory descriptors and valid similar measures are defined for the two descriptors. Experimental results indicate that our proposed scheme is reliable and efficient.

  3. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for the localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on preselected face-candidate regions. Likewise, color information and local contrast around the eyes are used for eye and mouth localization. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.

  4. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  5. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of extracting features in the radar target classification process using the J frequency band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier Transform (DFT) and Multiple Signal Characterisation (MUSIC), applied to the detection of the Doppler effect. The analysis favoured the DFT with a Hanning windowing function. The aim was to classify vehicle targets into two classes, wheeled vehicles and tracked vehicles. The results show that it is possible to classify them only while they are moving. The class feature results from the movement of the moving parts of the vehicle. However, we have not found any feature that could classify wheeled and tracked vehicles while they are not moving, even though their engines are on.
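
    As a hedged sketch of the preferred DFT-with-Hanning-window analysis, the snippet below estimates the spectrum of a noisy sampled signal containing a single Doppler-like component. The sampling rate and signal parameters are illustrative assumptions, not the paper's.

```python
import numpy as np

fs = 10_000.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 0.1, 1.0 / fs)     # 100 ms of samples
# Illustrative echo: a 1.2 kHz Doppler component buried in noise
rng = np.random.default_rng(3)
signal = np.cos(2 * np.pi * 1200.0 * t) + 0.5 * rng.normal(size=t.size)

window = np.hanning(t.size)         # Hanning window reduces spectral leakage
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"dominant component near {peak:.0f} Hz")
```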

  6. Abnormality Segmentation and Classification of Brain MR Images using Combined Edge, Texture Region Features and Radial Basis Function

    Directory of Open Access Journals (Sweden)

    B. Balakumar

    2013-09-01

    Full Text Available Magnetic Resonance Images (MRI) are widely used in the diagnosis of brain tumors. In this study we have developed a new approach for the automatic classification of normal and abnormal non-enhanced MRI images. The proposed method consists of four stages, namely preprocessing, feature extraction, feature reduction and classification. In the first stage an anisotropic filter is applied for noise reduction and to make the image suitable for extracting features. In the second stage, region-growing-based segmentation is used for partitioning the image into meaningful regions. In the third stage, combined edge- and texture-based features are extracted using the histogram and the Gray Level Co-occurrence Matrix (GLCM) from the segmented image. In the next stage PCA is used to reduce the dimensionality of the feature space, which results in a more efficient and accurate classification. Finally, in the classification stage, a supervised Radial Basis Function (RBF) classifier is used to classify the experimental images into normal and abnormal. The obtained experimental results are evaluated using the metrics sensitivity, specificity and accuracy. In comparison with other neural-network-based classifiers (SVM, FFNN and FSVM), the proposed technique significantly improves tumor detection accuracy.

  7. Edge-Based Feature Extraction Method and Its Application to Image Retrieval

    Directory of Open Access Journals (Sweden)

    G. Ohashi

    2003-10-01

    Full Text Available We propose a novel feature extraction method for content-based image retrieval using graphical rough sketches. The proposed method extracts features based on the shape and texture of objects. This edge-based feature extraction method works by representing the relative positional relationships between edge pixels, and has the advantage of being shift-, scale-, and rotation-invariant. In order to verify its effectiveness, we applied the proposed method to 1,650 images obtained from the Hamamatsu-city Museum of Musical Instruments and 5,500 images obtained from the Corel Photo Gallery. The results verified that the proposed method is an effective tool for achieving accurate retrieval.

  8. Medical Image Feature, Extraction, Selection And Classification

    Directory of Open Access Journals (Sweden)

    M.VASANTHA,

    2010-06-01

    Full Text Available Breast cancer is the most common type of cancer found in women. It is the most frequent form of cancer and one in 22 women in India is likely to suffer from breast cancer. This paper proposes an image classifier to classify mammogram images into normal, benign and malignant images. In total, 26 features, including histogram intensity features and GLCM features, are extracted from each mammogram image. A hybrid approach to feature selection is proposed in this paper which reduces the features by 75%. Decision tree algorithms are applied to mammography classification by using these reduced features. Experimental results have been obtained for a data set of 113 images of different types taken from MIAS. This technique of classification has not been attempted before and it reveals the potential of data mining in medical treatment.

  9. Extraction of essential features by quantum density

    Science.gov (United States)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction as an essential and important part of dataset exploration. This problem concerns the real character of the signals and images. The searched features are often difficult to identify because of data complexity and redundancy. A method of finding groups of essential features, according to the defined issues, is shown here. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th features from the original data, which indicates the important set of attributes. Finally, small sets of attributes are generated for subsets with different properties of features. They can be used for the construction of a small set of essential features. All figures were made in Matlab 6.

  10. Classification of Infrared Monitor Images of Coal Using Feature Texture Statistics and an Improved BP Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is very important to accurately recognize and locate pulverized and block coal seen in a coal mine's infrared image monitoring system. Infrared monitor images of pulverized and block coal were sampled in the roadway of a coal mine. Texture statistics from the grey level dependence matrix were selected as the criterion for classification. The distributions of the texture statistics were calculated and analysed. A normalizing function was added to the front end of the BP network with one hidden layer. An additional classification layer is joined behind the linear layer. The recognition of pulverized from block coal images was tested using the improved BP network. The results of the experiment show that texture variables from the grey level dependence matrix can act as recognizable features of the image. The improved BP network can then recognize the pulverized and block coal images.

  11. Extracting Semantically Annotated 3d Building Models with Textures from Oblique Aerial Imagery

    Science.gov (United States)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof and ground surfaces high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures subsequently get analyzed by a commercial software package to detect possible windows whose contours are projected into the original oriented source images and sparsely ray-casted to obtain their 3D world coordinates. With the windows being reintegrated into the previously extracted hull the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.

  12. Extracting Product Features from Chinese Product Reviews

    Directory of Open Access Journals (Sweden)

    Yahui Xi

    2013-12-01

    Full Text Available With the great development of e-commerce, the number of product reviews grows rapidly on e-commerce websites. Review mining has recently received a lot of attention, aiming to discover valuable information from massive product reviews. Product feature extraction is one of the basic tasks of product review mining, and its effectiveness can significantly influence the performance of subsequent jobs. Double Propagation is a state-of-the-art technique for product feature extraction. In this paper, we apply Double Propagation to product feature extraction from Chinese product reviews and adopt several techniques to improve precision and recall. First, indirect relations and verb product features are introduced to increase recall. Second, when ranking candidate product features using HITS, we expand the number of hubs by means of the dependency relation patterns between product features and opinion words to improve precision. Finally, the Normalized Pattern Relevance is employed to filter the extracted product features. Experiments on diverse real-life datasets show promising results.

  13. Retrieval Using Texture Features in High Resolution Multi-spectral Satellite Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Newsam, S D; Kamath, C

    2004-01-22

    Texture features have long been used in remote sensing applications to represent and retrieve image regions similar to a query region. Various representations of texture have been proposed based on the Fourier power spectrum, spatial co-occurrence, wavelets, Gabor filters, etc. These representations vary in their computational complexity and their suitability for representing different region types. Much of the work done thus far has focused on panchromatic imagery at low to moderate spatial resolutions, such as images from Landsat 1-7 which have a resolution of 15-30 m/pixel, and from SPOT 1-5 which have a resolution of 2.5-20 m/pixel. However, it is not clear which texture representation works best for the new classes of high resolution panchromatic (60-100 cm/pixel) and multi-spectral (4 bands for red, green, blue, and near infra-red at 2.4-4 m/pixel) imagery. It is also not clear how the different spectral bands should be combined. In this paper, we investigate the retrieval performance of several different texture representations using multi-spectral satellite images from IKONOS. A query-by-example framework, along with a manually chosen ground truth dataset, allows different combinations of texture representations and spectral bands to be compared. We focus on the specific problem of retrieving inhabited regions from images of urban and rural scenes. Preliminary results show that (1) the use of all spectral bands improves the retrieval performance, and (2) co-occurrence, wavelet and Gabor texture features perform comparably.
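
    As one hedged example of the texture representations compared in this record, the sketch below computes a small Gabor feature vector (mean and variance of the filter-response magnitude over a few scales and orientations) with scikit-image; the filter-bank parameters and the toy region are illustrative, not those used in the study.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Mean/variance of Gabor magnitude responses over scales and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.asarray(feats)

# Illustrative textured region (oriented sinusoidal pattern plus noise)
rng = np.random.default_rng(4)
y, x = np.mgrid[0:64, 0:64]
region = np.sin(0.4 * x) + 0.2 * rng.normal(size=(64, 64))
print(gabor_features(region).shape)   # 3 scales x 4 orientations x 2 stats
```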

  14. Automatic system for radar echoes filtering based on textural features and artificial intelligence

    Science.gov (United States)

    Hedir, Mehdia; Haddad, Boualem

    2016-11-01

    Among the very popular Artificial Intelligence (AI) techniques, the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) were retained to process Ground Echoes (GE) in meteorological radar images taken from Setif (Algeria) and Bordeaux (France), sites with different climates and topographies. To achieve this task, the AI techniques were associated with textural approaches. We used the Gray Level Co-occurrence Matrix (GLCM) and the Completed Local Binary Pattern (CLBP); both methods have been widely used in image analysis. The obtained results show the efficiency of texture in preserving precipitation forecasts on both sites, with an accuracy of 98% on Bordeaux and 95% on Setif, whichever AI technique is used. 98% of GE are suppressed with SVM, a rate that outperforms the ANN. The CLBP approach associated with SVM eliminates 98% of GE and preserves precipitation forecasts better on the Bordeaux site than on Setif's, while it exhibits lower accuracy with ANN. The SVM classifier is well adapted to the proposed application since the average filtering rate is 95-98% with texture and 92-93% with CLBP. These approaches also allow removing Anomalous Propagations (APs), with a better accuracy of 97.15% obtained with texture and SVM. In fact, textural features associated with AI techniques are an efficient tool for incoherent radars to suppress spurious echoes.

  15. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. A Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording the false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from the experiments based on different sets of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when the number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves EER for the proposed iris recognition system.

  16. Feature Extraction and Selection From the Perspective of Explosive Detection

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image summarizing the information content in an image and in the process providing an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, it can be the temperature measurement (using an infra-red camera) of the area representing the pixel or the X-ray attenuation in a given volume element of a 3-d image or it may even represent the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features which are typically derived from a group of pixels often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection ( typically a piece of luggage or a brief case) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generate one or more (depending on the number of energy levels used

  17. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2015-01-01

    Full Text Available The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture property of the multi-resolution spectrogram of emotional speech should be a good feature set for emotion classification in speech. Furthermore, multi-resolution analysis of texture can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems among different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification gains for real-life emotion recognition in speech.

  18. Classification of high spatial resolution imagery using optimal Gabor-filters-based texture features

    Science.gov (United States)

    Zhao, Yindi; Wu, Bo

    2007-06-01

    Texture analysis has received great attention in the interpretation of high-resolution satellite images. This paper aims to find optimal filters for discriminating between residential areas and other land cover types in high spatial resolution satellite imagery. Moreover, in order to reduce the blurring border effect, inherent in texture analysis, which introduces important errors in the transition areas between different texture units, a classification procedure is designed for such high spatial resolution satellite images as follows. First, residential areas are detected using Gabor texture features: two clusters, one residential and one non-residential, are obtained with the fuzzy C-means algorithm in the frequency space based on Gabor filters. Subsequently, a mask is generated to eliminate residential areas so that other land-cover types can be classified accurately, without interference from the spectrally heterogeneous residential areas. Afterwards, other objects are classified using spectral features by the MAP (maximum a posteriori) - ICM (iterated conditional mode) classification algorithm, designed to enforce spatial constraints in the classification. Experimental results on high spatial resolution remote sensing data confirm that the proposed algorithm provides remarkably better detection accuracy than conventional approaches in terms of both objective measurements and visual evaluation.

  19. Feature extraction for structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois [Los Alamos National Laboratory; Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Nishio, Mayuko [UNIV OF TOKYO; Worden, Keith [UNIV OF SHEFFIELD; Takeda, Nobuo [UNIV OF TOKYO

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered are sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method for multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
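
    The outlier detection step mentioned here uses the Mahalanobis distance on multivariate feature vectors. A minimal, hedged sketch under a Gaussian assumption is shown below; the chi-square quantile threshold and the synthetic data are illustrative choices, not the study's.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(features, candidates, alpha=0.01):
    """Flag candidate feature vectors far from the reference distribution.

    features:   (n, d) reference feature vectors (e.g. from experiments)
    candidates: (m, d) feature vectors to test (e.g. from simulations)
    Returns squared Mahalanobis distances and a boolean outlier mask.
    """
    mean = features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    diff = candidates - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    threshold = chi2.ppf(1.0 - alpha, df=features.shape[1])
    return d2, d2 > threshold

# Illustrative reference features and two candidate parameter sets
rng = np.random.default_rng(5)
reference = rng.normal(size=(200, 3))
candidates = np.array([[0.1, -0.2, 0.0],    # consistent with reference
                       [4.0, 4.0, 4.0]])    # far from reference
d2, outlier = mahalanobis_outliers(reference, candidates)
print(np.round(d2, 2), outlier)
```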

  20. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique that is used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, contrary to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements such as camera resolution, hardware size, expensive equipment and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition can be done through several sequential steps which include pre-processing, feature extraction, post-processing, and matching. In this paper, we adopted a directional high-low pass filter for feature extraction. A box-counting fractal dimension and the iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
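
    One of the feature representations proposed here is a box-counting fractal dimension. The sketch below estimates it for a binary pattern by counting occupied boxes over a few box sizes and fitting a log-log slope; the box sizes and the input pattern are illustrative.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (Minkowski) dimension of a binary image.

    For each box size s, count boxes that contain at least one foreground
    pixel; the dimension is the slope of log(count) versus log(1/s).
    """
    counts = []
    n = binary.shape[0]
    for s in sizes:
        count = 0
        for r in range(0, n, s):
            for c in range(0, n, s):
                if binary[r:r + s, c:c + s].any():
                    count += 1
        counts.append(count)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Illustrative binary texture pattern on a 64 x 64 grid
rng = np.random.default_rng(6)
pattern = rng.random((64, 64)) < 0.3
print(round(box_counting_dimension(pattern), 2))
```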

  1. Probability mapping of scarred myocardium using texture and intensity features in CMR images

    Science.gov (United States)

    2013-01-01

    Background The myocardium exhibits heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of scarred area in the myocardium. Methods In this paper, we propose a probability mapping technique using Texture and Intensity features to describe heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of pixel and the other on underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of pixel and the underlying texture information are presented. We hypothesize that the probability mapping of myocardium offers alternate visualization, possibly showing the details with physiological significance difficult to detect visually in the original CMR image. Conclusion The probability mapping obtained from the two features provides a way to define different cardiac segments which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280

  2. Fixed kernel regression for voltammogram feature extraction

    Science.gov (United States)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.

  3. Automatic Melody Generation System with Extraction Feature

    Science.gov (United States)

    Ida, Kenichi; Kozuki, Shinichi

    In this paper, we propose a melody generation system based on the analysis of existing melodies. In addition, we introduce a mechanism that takes the user's preferences into account. Melody generation is done by arranging pitches optimally on a given rhythm. The optimality criterion is determined using feature elements extracted from existing music by the proposed method. Moreover, the user's preferences are reflected in the criterion by letting the user adjust some of the feature elements. A genetic algorithm (GA) then optimizes the pitch array based on this criterion, completing the system.

  4. Touching Textured Surfaces: Cells in Somatosensory Cortex Respond Both to Finger Movement and to Surface Features

    Science.gov (United States)

    Darian-Smith, Ian; Sugitani, Michio; Heywood, John; Karita, Keishiro; Goodwin, Antony

    1982-11-01

    Single neurons in Brodmann's areas 3b and 1 of the macaque postcentral gyrus discharge when the monkey rubs the contralateral finger pads across a textured surface. Both the finger movement and the spatial pattern of the surface determine this discharge in each cell. The spatial features of the surface are represented unambiguously only in the responses of populations of these neurons, and not in the responses of the constituent cells.

  5. Online Feature Extraction Algorithms for Data Streams

    Science.gov (United States)

    Ozawa, Seiichi

    Along with the development of the network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured real-time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for an identification purpose. Those face images are considered as a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can be autonomously adapted to the change of data distributions, is solicited. In this review paper, we discuss a recent trend on online feature extraction for streaming data. There have been proposed a variety of feature extraction methods for streaming data recently. Due to the space limitation, we here focus on the incremental principal component analysis.
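
    The review centers on incremental principal component analysis for streaming data. A hedged sketch of the idea using scikit-learn's IncrementalPCA, updated batch by batch over a simulated stream, is shown below; the dimensions and batch sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Simulated stream: batches of 64-dimensional feature vectors (e.g. image
# patches) arriving over time. Dimensions and batch size are illustrative.
rng = np.random.default_rng(7)
ipca = IncrementalPCA(n_components=8)

for _ in range(20):                       # 20 mini-batches from the "stream"
    batch = rng.normal(size=(50, 64))
    ipca.partial_fit(batch)               # update the eigenspace incrementally

# Project newly arriving samples onto the current feature subspace
new_samples = rng.normal(size=(5, 64))
features = ipca.transform(new_samples)
print(features.shape)                     # (5, 8)
```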

  6. Testing Texture of VHR Panchromatic Data as a Feature of Land Cover Classification

    Science.gov (United States)

    Lewiński, Stanisław; Aleksandrowicz, Sebastian; Banaszkiewicz, Marek

    2015-04-01

    While it is well-known that texture can be used to classify very high resolution (VHR) data, the limits of its applicability have not been unequivocally specified. This study examines whether it is possible to divide satellite images into two classes associated with "low" and "high" texture values in the initial stage of processing VHR images. This approach can be effectively used in object-oriented classification. Based on the panchromatic channel of KOMPSAT-2 images from five areas of Europe, datasets with down-sampled pixel resolutions of 1, 2, 4, 8, and 16 m were prepared. These images were processed using different texture analysis techniques in order to discriminate between basic land cover classes. Results were assessed using the normalized feature space distance expressed by the Jeffries-Matusita distance. The best results were observed for images with the highest resolution processed by the Laplacian filter. Our research shows that a classification approach based on the idea of "low" and "high" textures can be effectively applied to panchromatic data with a resolution of 8 m or higher.

  7. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on Surface Electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an Auto Regressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector that was seen to give a more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO DATABASE, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
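
    Two of the features named here, autoregressive (AR) model coefficients and the mean frequency, can be sketched as follows with plain NumPy (Yule-Walker estimation for the AR part). The model order, sampling rate and synthetic signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ar_coefficients(signal, order=4):
    """Estimate AR model coefficients via the Yule-Walker equations."""
    x = np.asarray(signal, dtype=np.float64)
    x = x - x.mean()
    n = x.size
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def mean_frequency(signal, fs):
    """Spectral mean frequency, another common SEMG feature."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float((freqs * spectrum).sum() / spectrum.sum())

# Illustrative SEMG-like signal: band-limited noise sampled at 2 kHz
rng = np.random.default_rng(8)
fs = 2000.0
emg = np.convolve(rng.normal(size=4000), np.ones(5) / 5, mode='same')

feature_vector = np.concatenate([ar_coefficients(emg, order=4),
                                 [mean_frequency(emg, fs)]])
print(np.round(feature_vector, 3))
```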

  8. A Kind of Visual Speech Feature with the Geometric and Local Inner Texture Description

    Directory of Open Access Journals (Sweden)

    Yanfeng Sun

    2013-02-01

    Full Text Available In this paper, we propose a type of joint feature with geometric parameters and color moments to represent the speaking-mouth frames for image-based visual speech synthesis systems. Based on FDP around the mouth area, the geometric feature is obtained by computing Euclidean distances describing the width of the speaking mouth, the height of the outer and inner lips and the distances between them. The color moment component of the joint feature is obtained by calculating the texture between the upper and lower inner lips to describe the visibility of the teeth. Through analyzing the correspondence between teeth visibility and the components of the RGB and HSV color spaces on the samples separately, we discovered that the green and blue components are good at describing the change of teeth visibility. The experiments show that the proposed joint feature can effectively provide the basis for categorizing the different speaking states, especially in the sense of lip shape and tooth visibility. The evaluation of the clustering results is done by analyzing the derived parameters of the silhouette function. The analysis shows that, compared with the geometric feature only and PCA, our proposed feature, combining shape and local inner-lip texture cues, performs better at improving the similarity between samples within the clusters. In the future, more expressive features with shape and local texture information should be explored to increase the proportion of similar samples within the clusters and to improve the descriptive ability of speaking mouths.

  9. Personal Identification with Face Biometrics using Color Local Texture Features

    Directory of Open Access Journals (Sweden)

    Vani A.Hiremani

    2013-06-01

    Full Text Available Face recognition (FR) has received significant interest in pattern recognition and computer vision due to the wide range of applications including video surveillance, biometric identification, and face indexing in multimedia contents. Recently, local texture features have gained reputation as powerful face descriptors because they are believed to be more robust to variations of facial pose, expression, occlusion, etc. In particular, the local binary pattern (LBP) texture feature has proven to be highly discriminative for FR due to its different levels of locality. Hence, it is proposed to employ these features along with color local texture features for an efficient FR system. A personal identification accuracy of around 97% is achieved with the face modality using color local texture features.
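
    As a hedged illustration of the local binary pattern (LBP) descriptor this record relies on, the sketch below computes a normalized uniform-LBP histogram for one gray-level region with scikit-image. The neighbourhood parameters (8 points, radius 1) are a common default, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_region, n_points=8, radius=1):
    """Normalized histogram of uniform LBP codes for one image region."""
    codes = local_binary_pattern(gray_region, n_points, radius, method='uniform')
    n_bins = n_points + 2                      # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Illustrative face-like region: smooth gradient plus texture noise
rng = np.random.default_rng(9)
region = np.tile(np.linspace(0, 1, 32), (32, 1)) + 0.1 * rng.random((32, 32))
region = np.clip(region * 255, 0, 255).astype(np.uint8)
descriptor = lbp_histogram(region)
print(descriptor.round(3))
```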

  10. Extracting Canopy Surface Texture from Airborne Laser Scanning Data for the Supervised and Unsupervised Prediction of Area-Based Forest Characteristics

    Directory of Open Access Journals (Sweden)

    Mikko T. Niemi

    2016-07-01

    Full Text Available Area-based analyses of airborne laser scanning (ALS data are an established approach to obtain wall-to-wall predictions of forest characteristics for vast areas. The analyses of sparse data in particular are based on the height value distributions, which do not produce optimal information on the horizontal forest structure. We evaluated the complementary potential of features quantifying the textural variation of ALS-based canopy height models (CHMs for both supervised (linear regression and unsupervised (k-Means clustering analyses. Based on a comprehensive literature review, we identified a total of four texture analysis methods that produced rotation-invariant features of different order and scale. The CHMs and the textural features were derived from practical sparse-density, leaf-off ALS data originally acquired for ground elevation modeling. The features were extracted from a circular window of 254 m2 and related with boreal forest characteristics observed from altogether 155 field sample plots. Features based on gray-level histograms, distribution of forest patches, and gray-level co-occurrence matrices were related with plot volume, basal area, and mean diameter with coefficients of determination (R2 of up to 0.63–0.70, whereas features that measured the uniformity of local binary patterns of the CHMs performed poorer. Overall, the textural features compared favorably with benchmark features based on the point data, indicating that the textural features contain additional information useful for the prediction of forest characteristics. Due to the developed processing routines for raster data, the CHM features may potentially be extracted with a lower computational burden, which promotes their use for applications such as pre-stratification or guiding the field plot sampling based solely on ALS data.

  11. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Full Text Available Because saliency can be used as prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other relevant image-based applications. In the case of saliency detection in cluttered scenes, the salient object/region detected needs not only to be distinguished clearly from the background, but preferably also to be informative in terms of complete contour and local texture details to facilitate subsequent processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection in cluttered scenes. This model uses a combination of local texture patterns and color distribution as well as contour information to encode the superpixels, characterizing the local features of the image for region contrast computation. Combining this region contrast with the global saliency probability, a full-resolution saliency map, in which the salient object/region detected adheres more closely to its inherent features, is obtained on the basis of the corresponding high-level saliency spatial distribution as well as pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets are carried out, and the comparative results show that the proposed method improves the detection performance in terms of the corresponding measurements.

  12. Support vector machine model for diagnosing pneumoconiosis based on wavelet texture features of digital chest radiographs.

    Science.gov (United States)

    Zhu, Biyun; Chen, Hui; Chen, Budong; Xu, Yan; Zhang, Kuan

    2014-02-01

    This study aims to explore the classification ability of decision trees (DTs) and support vector machines (SVMs) to discriminate between the digital chest radiographs (DRs) of pneumoconiosis patients and control subjects. Twenty-eight wavelet-based energy texture features were calculated at the lung fields on DRs of 85 healthy controls and 40 patients with stage I and stage II pneumoconiosis. DTs with algorithm C5.0 and SVMs with four different kernels were trained by samples with two combinations of the texture features to classify a DR as of a healthy subject or of a patient with pneumoconiosis. All of the models were developed with fivefold cross-validation, and the final performances of each model were compared by the area under receiver operating characteristic (ROC) curve. For both SVM (with a radial basis function kernel) and DT (with algorithm C5.0), areas under ROC curves (AUCs) were 0.94 ± 0.02 and 0.86 ± 0.04 (P = 0.02) when using the full feature set and 0.95 ± 0.02 and 0.88 ± 0.04 (P = 0.05) when using the selected feature set, respectively. When built on the selected texture features, the SVM with a polynomial kernel showed a higher diagnostic performance with an AUC value of 0.97 ± 0.02 than SVMs with a linear kernel, a radial basis function kernel and a sigmoid kernel with AUC values of 0.96 ± 0.02 (P = 0.37), 0.95 ± 0.02 (P = 0.24), and 0.90 ± 0.03 (P = 0.01), respectively. The SVM model with a polynomial kernel built on the selected feature set showed the highest diagnostic performance among all tested models when using either all the wavelet texture features or the selected ones. The model has a good potential in diagnosing pneumoconiosis based on digital chest radiographs.
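
    The 28 wavelet-based energy texture features used here are not spelled out in this record; as a hedged sketch of the general idea, the snippet below computes the energy of each detail subband of a 2-D wavelet decomposition with PyWavelets. The wavelet, decomposition level and input patch are illustrative.

```python
import numpy as np
import pywt

def wavelet_energy_features(region, wavelet='db4', level=3):
    """Energy of each detail subband of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(region, wavelet=wavelet, level=level)
    feats = []
    for detail_level in coeffs[1:]:            # skip the approximation band
        for band in detail_level:              # horizontal, vertical, diagonal
            feats.append(float(np.mean(band ** 2)))
    return np.asarray(feats)                   # 3 bands x `level` levels

# Illustrative lung-field patch: oriented texture plus noise
rng = np.random.default_rng(10)
y, x = np.mgrid[0:128, 0:128]
patch = np.sin(0.3 * x) + 0.3 * rng.normal(size=(128, 128))
print(wavelet_energy_features(patch).round(4))
```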

  13. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    Science.gov (United States)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because there are issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques to be analyzed further in the recent development of feature extraction and classification.

  14. Trace Ratio Criterion for Feature Extraction in Classification

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    2014-01-01

    Full Text Available A generalized linear discriminant analysis based on trace ratio criterion algorithm (GLDA-TRA is derived to extract features for classification. With the proposed GLDA-TRA, a set of orthogonal features can be extracted in succession. Each newly extracted feature is the optimal feature that maximizes the trace ratio criterion function in the subspace orthogonal to the space spanned by the previous extracted features.
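
    For reference, the trace ratio criterion that such methods maximize is usually written as below, with S_b and S_w the between- and within-class scatter matrices and the columns of W constrained to be orthonormal; this is the standard textbook form, shown as a hedged summary rather than the paper's exact formulation.

```latex
W^{\ast} \;=\; \arg\max_{W^{\top} W = I}\;
\frac{\operatorname{tr}\!\left(W^{\top} S_b\, W\right)}
     {\operatorname{tr}\!\left(W^{\top} S_w\, W\right)}
```

    Each newly extracted column of W is then required to be orthogonal to the previously extracted ones, which matches the successive extraction described in the abstract.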

  15. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  16. An easy to use ArcMap based texture analysis program for extraction of flooded areas from TerraSAR-X satellite image

    Science.gov (United States)

    Pradhan, Biswajeet; Hagemann, Ulrike; Shafapour Tehrany, Mahyat; Prechtel, Nikolas

    2014-02-01

    Extraction of flooded areas from synthetic aperture radar (SAR) data, and especially TerraSAR-X data, is one of the most challenging tasks in flood management and planning. SAR data, due to its high spatial resolution and its all-weather capability, is a proper choice for tropical countries. Texture is considered an effective factor in distinguishing classes, especially in SAR imagery, which records backscatter that carries information on the kind, direction, heterogeneity and relationships of the features. This paper puts forward a computer program for texture analysis of high resolution radar data. The texture analysis program is introduced and discussed using the gray-level co-occurrence matrix (GLCM). To demonstrate the ability and correctness of this program, a test subset of TerraSAR-X imagery from the Terengganu area, Malaysia, was analyzed and pixel-based and object-based classifications were attempted. The thematic maps derived by the pixel-based method could not achieve acceptable visual interpretation, and for that reason no accuracy assessment was performed on them. The overall accuracy achieved by the object-based method was 83.63% with a kappa coefficient of 0.8. Results on image texture classification showed that the proposed program is capable of texture analysis of TerraSAR-X images and that the obtained textural analysis results in high classification accuracy. The proposed texture analysis program can be used in many applications such as land use/cover (LULC) mapping, hazard studies and many other applications.

  17. Segmentation of anatomical branching structures based on texture features and conditional random field

    Science.gov (United States)

    Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin

    2012-02-01

    This work is a part of our ongoing study aimed at understanding a relation between the topology of anatomical branching structures with the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step for analyzing topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi automated segmentation algorithms based on neural network classification, fuzzy-connectedness, vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to other fully automated methods. This indicates that segmentation based on CRF incorporated with texture descriptors has potential to efficiently support the analysis of complex topology of the ducts and aid in development of realistic breast anatomy phantoms.

  18. Multi-fractal texture features for brain tumor and edema segmentation

    Science.gov (United States)

    Reza, S.; Iftekharuddin, K. M.

    2014-03-01

    In this work, we propose a fully automatic brain tumor and edema segmentation technique in brain magnetic resonance (MR) images. Different brain tissues are characterized using the novel texture features such as piece-wise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm) and Gabor-like textons, along with regular intensity and intensity difference features. Classical Random Forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-art works using a publicly available dataset known as Brain Tumor Segmentation (BRATS) 2012 [1]. Quantitative evaluation is done using the online evaluation tool from Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on the average, outperforms other state-of-the art works in both training and challenge cases in the BRATS competition.

  19. Detection of sub-kilometer craters in high resolution planetary images using shape and texture features

    Science.gov (United States)

    Bandeira, Lourenço; Ding, Wei; Stepinski, Tomasz F.

    2012-01-01

    Counting craters is a paramount tool of planetary analysis because it provides relative dating of planetary surfaces. Dating surfaces with high spatial resolution requires counting a very large number of small, sub-kilometer size craters. Exhaustive manual surveys of such craters over extensive regions are impractical, sparking interest in designing crater detection algorithms (CDAs). As a part of our effort to design a CDA, which is robust and practical for planetary research analysis, we propose a crater detection approach that utilizes both shape and texture features to identify efficiently sub-kilometer craters in high resolution panchromatic images. First, a mathematical morphology-based shape analysis is used to identify regions in an image that may contain craters; only those regions - crater candidates - are the subject of further processing. Second, image texture features in combination with the boosting ensemble supervised learning algorithm are used to accurately classify previously identified candidates into craters and non-craters. The design of the proposed CDA is described and its performance is evaluated using a high resolution image of Mars for which sub-kilometer craters have been manually identified. The overall detection rate of the proposed CDA is 81%, the branching factor is 0.14, and the overall quality factor is 72%. This performance is a significant improvement over the previous CDA based exclusively on the shape features. The combination of performance level and computational efficiency offered by this CDA makes it attractive for practical application.

  20. Entropy-based adaptive nuclear texture features are independent prognostic markers in a total population of uterine sarcomas.

    Science.gov (United States)

    Nielsen, Birgitte; Hveem, Tarjei Sveinsgjerd; Kildal, Wanja; Abeler, Vera M; Kristensen, Gunnar B; Albregtsen, Fritz; Danielsen, Håvard E

    2015-04-01

    Nuclear texture analysis measures the spatial arrangement of the pixel gray levels in a digitized microscopic nuclear image and is a promising quantitative tool for prognosis of cancer. The aim of this study was to evaluate the prognostic value of entropy-based adaptive nuclear texture features in a total population of 354 uterine sarcomas. Isolated nuclei (monolayers) were prepared from 50 µm tissue sections and stained with Feulgen-Schiff. Local gray level entropy was measured within small windows of each nuclear image and stored in gray level entropy matrices, and two superior adaptive texture features were calculated from each matrix. The 5-year crude survival differed significantly between patient groups stratified by the texture features. Entropy-based adaptive nuclear texture was an independent prognostic marker for crude survival in multivariate analysis including relevant clinicopathological features (HR = 2.1, P = 0.001), and should therefore be considered as a potential prognostic marker in uterine sarcomas.
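
    A minimal sketch of the local-entropy measurement described above is given below, assuming a segmented nuclear image is already available as an 8-bit array. The window size and the summary statistics are illustrative stand-ins for the paper's adaptive features derived from the grey level entropy matrices.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import square

def entropy_features(nucleus_img, window=9):
    """nucleus_img: 2-D uint8 image of one isolated, segmented nucleus."""
    ent = entropy(nucleus_img, square(window))        # local grey level entropy map
    return {"mean_entropy": float(ent.mean()),
            "max_entropy": float(ent.max()),
            "entropy_variance": float(ent.var())}

# feats = entropy_features(nucleus_image)             # one nucleus at a time
```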

  1. Classification Features of US Images Liver Extracted with Co-occurrence Matrix Using the Nearest Neighbor Algorithm

    Science.gov (United States)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita; Nicolae, Mariana Carmen

    2011-12-01

    Co-occurrence matrices have been applied successfully for echographic image characterization because they contain information about the spatial distribution of grey-scale levels in an image. The paper deals with the analysis of pixels in selected regions of interest of US images of the liver. The useful information obtained refers to texture features such as entropy, contrast, dissimilarity and correlation, extracted with the co-occurrence matrix. The analyzed US images were grouped into two distinct sets: healthy liver and steatosis (fatty) liver. These two sets of echographic images of the liver build a database that includes only histologically confirmed cases: 10 images of healthy liver and 10 images of steatosis liver. The healthy subjects were used to compute the four textural indices and also served as the control dataset. We chose to study this disease because steatosis is the abnormal retention of lipids in cells. The texture features are statistical measures and can be used to characterize the irregularity of tissues. The goal is to extract this information using the Nearest Neighbor classification algorithm. The K-NN algorithm is a powerful tool for classifying texture features, with the healthy-liver features used as the training set and the steatosis-liver features as the holdout set. The results could be used to quantify the texture information and allow a clear distinction between healthy and steatosis liver.
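
    A hedged sketch of this pipeline (ROI, co-occurrence features, nearest-neighbour classification) is shown below using scikit-image and scikit-learn. ROI extraction, image loading and the class labels are assumed to be done elsewhere, and since graycoprops does not return entropy directly it is computed from the normalized matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def cooccurrence_features(roi):
    """Entropy, contrast, dissimilarity and correlation from one grey-level ROI."""
    glcm = graycomatrix(roi.astype(np.uint8), distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # entropy of the co-occurrence probabilities
    return [ent,
            graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "dissimilarity")[0, 0],
            graycoprops(glcm, "correlation")[0, 0]]

# rois_healthy / rois_steatosis: lists of 2-D ROI arrays (assumed available)
# X = [cooccurrence_features(r) for r in rois_healthy + rois_steatosis]
# y = [0] * len(rois_healthy) + [1] * len(rois_steatosis)
# knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # K-NN classification, as in the abstract
```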

  2. Extraction of photomultiplier-pulse features

    Energy Technology Data Exchange (ETDEWEB)

    Joerg, Philipp; Baumann, Tobias; Buechele, Maximilian; Fischer, Horst; Gorzellik, Matthias; Grussenmeyer, Tobias; Herrmann, Florian; Kremser, Paul; Kunz, Tobias; Michalski, Christoph; Schopferer, Sebastian; Szameitat, Tobias [Physikalisches Institut der Universitaet Freiburg, Freiburg im Breisgau (Germany)

    2013-07-01

    Experiments in subatomic physics have to handle data rates of several MHz per readout channel to reach statistical significance for the measured quantities. Frequently such experiments have to deal with fast signals which may cover large dynamic ranges. For applications which require amplitude as well as time measurements with the highest accuracy, transient recorders with very high resolution and deep on-board memory are the first choice. We have built a 16-channel, 12- or 14-bit, single unit VME64x/VXS sampling ADC module which may sample at rates up to 1 GS/s. Fast algorithms have been developed and successfully implemented for the readout of the recoil-proton detector at the COMPASS-II Experiment at CERN. We report on the implementation of the feature extraction algorithms and the performance achieved during a pilot run with the COMPASS-II Experiment.

  3. Concrete Slump Classification using GLCM Feature Extraction

    Science.gov (United States)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mixes designed for 30 MPa compressive strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired with a Nikon D-7000 camera set up at high resolution. In the first step, the RGB images were converted to grey-scale and then cropped to 1024 x 1024 pixels. The cropped images were then analysed with an open-source program to extract GLCM features. The results show that for higher slumps the contrast becomes lower, while the correlation, energy, and homogeneity become higher.

  4. Classification of Land-use Based on Remote Sensing Image Texture Features with Multi-scales and Cardinal Direction Inspired by Domain Knowledge

    Directory of Open Access Journals (Sweden)

    LAN Zeying

    2016-08-01

    Full Text Available Texture features based on the grey level co-occurrence matrix (GLCM) are effective for image analysis, and this paper proposes a new method to construct the GLCM with multi-scale and cardinal direction factors inspired by domain knowledge, in order to improve the performance of texture features and address the uncertainty problems in image classification of land-use. By simulating the process of human visual interpretation, an integrated computation pattern over GIS and RS data was applied. Firstly, on the basis of image registration, some classic GIS spatial data mining algorithms were employed to asymptotically extract domain morphological knowledge. Next, under the responding mechanism derived from correlation analysis, an algorithm for establishing GLCM multi-scale windows that match categories one by one and an algorithm for determining GLCM weighted cardinal direction windows that describe observation orientation were designed based on the relevant morphology indexes. Experimental results indicate that there is a strong correlation between domain morphological knowledge and the GLCM construction factors; meanwhile, with lower computational complexity, the new method can extract stable texture features that describe the actual spatial meaning of complex objects, thereby improving the image classification accuracy of land-use.

  5. The extraction of depth structure from shading and texture in the macaque brain.

    Directory of Open Access Journals (Sweden)

    Koen Nelissen

    Full Text Available We used contrast-agent enhanced functional magnetic resonance imaging (fMRI in the alert monkey to map the cortical regions involved in the extraction of 3D shape from the monocular static cues, texture and shading. As in the parallel human imaging study, we contrasted the 3D condition to several 2D control conditions. The extraction of 3D shape from texture (3D SfT involves both ventral and parietal regions, in addition to early visual areas. Strongest activation was observed in CIP, with decreasing strength towards the anterior part of the intraparietal sulcus (IPS. In the ventral stream 3D SfT sensitivity was observed in a ventral portion of TEO. The extraction of 3D shape from shading (3D SfS involved predominantly ventral regions, such as V4 and a dorsal portion of TEO. These results are similar to those obtained earlier in human subjects and indicate that the extraction of 3D shape from texture is performed in both ventral and dorsal regions for both species, as are the motion and disparity cues, whereas shading is mainly processed in the ventral stream.

  6. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    Science.gov (United States)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its day-and-night and all-weather operation, and feature extraction from high resolution SAR images has become a topic of intense interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed according to the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical grey level co-occurrence matrix and the variogram function, respectively, and directional information is considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then all features are fused using these weights. At last, the fused image is classified with the K-means method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; for comparison, two groups of experiments based only on statistical texture and only on structural texture were also carried out. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate is more than 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. In the study area, the results show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.

  7. Image Analysis of Soil Micromorphology: Feature Extraction, Segmentation, and Quality Inference

    Directory of Open Access Journals (Sweden)

    Petros Maragos

    2004-06-01

    Full Text Available We present an automated system that we have developed for estimation of the bioecological quality of soils using various image analysis methodologies. Its goal is to analyze soil-section images, extract features related to their micromorphology, and relate the visual features to various degrees of soil fertility inferred from biochemical characteristics of the soil. The image methodologies used range from low-level image processing tasks, such as nonlinear enhancement, multiscale analysis, geometric feature detection, and size distributions, to object-oriented analysis, such as segmentation, region texture, and shape analysis.

  8. FAST DISCRETE CURVELET TRANSFORM BASED ANISOTROPIC FEATURE EXTRACTION FOR IRIS RECOGNITION

    Directory of Open Access Journals (Sweden)

    Amol D. Rahulkar

    2010-11-01

    Full Text Available Feature extraction plays a very important role in iris recognition. Recent research on multiscale analysis provides a good opportunity to extract more accurate information for iris recognition. In this work, a new set of directional iris texture features based on the 2-D Fast Discrete Curvelet Transform (FDCT) is proposed. The proposed approach divides the normalized iris image into six sub-images and the curvelet transform is applied independently on each sub-image. The anisotropic feature vector for each sub-image is derived using the directional energies of the curvelet coefficients. These six feature vectors are combined to create the resultant feature vector. During recognition, the nearest neighbor classifier based on Euclidean distance is used for authentication. The effectiveness of the proposed approach has been tested on two different databases, namely UBIRIS and MMU1. Experimental results show the superiority of the proposed approach.

  9. HMM based Offline Signature Verification system using ContourletTransform and Textural features

    Directory of Open Access Journals (Sweden)

    K N PUSHPALATHA

    2014-07-01

    Full Text Available Handwritten signatures occupy a very special place in the identification of an individual, and verification is a challenging task because of the possible variations in directions and shapes of the constituent strokes of written samples. In this paper we investigate an offline verification system based on the fusion of the contourlet transform, directional features and a Hidden Markov Model (HMM) as classifier. The handwritten signature image is preprocessed for noise removal and a two-level contourlet transform is applied to obtain the transform coefficients. The textural features are computed and concatenated with the coefficients of the contourlet transform to form the final feature vector; this is done for both the query and the database signature images. The classification results are computed using the HTK tool with the HMM classifier. The experimental results are computed using GPDS-960 database images to obtain parameters such as the False Rejection Rate (FRR), False Acceptance Rate (FAR) and Total Success Rate (TSR). The results show that the values of FRR and FAR are improved compared to the existing algorithm.

  10. Rapid extraction of image texture by co-occurrence using a hybrid data structure

    Science.gov (United States)

    Clausi, David A.; Zhao, Yongping

    2002-07-01

    Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers since, to achieve preferred computational speeds, the list should be sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% ( σ=3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
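
    The idea behind the GLCLL/GLCHS, storing only the non-zero co-occurring pairs so that the texture statistics never iterate over empty matrix cells, can be illustrated with a Python dictionary (itself a hash table). This is only a conceptual sketch of the sparse-storage idea, not the authors' C implementation; the offset and the contrast statistic are just examples.

```python
import numpy as np

def sparse_cooccurrence(window, dr=0, dc=1):
    """Accumulate co-occurrence counts for pixel offset (dr, dc) into a dict."""
    counts = {}
    rows, cols = window.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            pair = (int(window[r, c]), int(window[r + dr, c + dc]))
            counts[pair] = counts.get(pair, 0) + 1
    total = float(sum(counts.values()))
    return {pair: n / total for pair, n in counts.items()}   # non-zero probabilities only

def contrast(probs):
    """Example statistic computed only over the stored non-zero probabilities."""
    return sum(p * (i - j) ** 2 for (i, j), p in probs.items())

# probs = sparse_cooccurrence(image[0:25, 0:25])   # one texture window
# print(contrast(probs))
```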

  11. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    Science.gov (United States)

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α < 0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.

  12. Zone-size nonuniformity of {sup 18}F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Nai-Ming [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung Memorial Hospital, Department of Nuclear Medicine, Keelung (China); National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Fang, Yu-Hua Dean [Chang Gung University, Department of Electrical Engineering, Taiyuan (China); Lee, Li-yu [Chang Gung University College of Medicine, Department of Pathology, Chang Gung Memorial Hospital, Taoyuan (China); Chang, Joseph Tung-Chieh; Tsan, Din-Li [Chang Gung University College of Medicine, Department of Radiation Oncology, Chang Gung Memorial Hospital, Taoyuan (China); Ng, Shu-Hang [Chang Gung University College of Medicine, Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Taoyuan (China); Wang, Hung-Ming [Chang Gung University College of Medicine, Division of Hematology/Oncology, Department of Internal Medicine, Chang Gung Memorial Hospital, Taoyuan (China); Liao, Chun-Ta [Chang Gung University College of Medicine, Department of Otolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Taoyuan (China); Yang, Lan-Yan [Chang Gung Memorial Hospital, Biostatistics Unit, Clinical Trial Center, Taoyuan (China); Hsu, Ching-Han [National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Yen, Tzu-Chen [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung University College of Medicine, Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taipei (China)

    2014-10-23

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUV{sub max} 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment {sup 18}F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUV{sub max} 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification. (orig.)

  13. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen's case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF

  14. An Adequate Approach to Image Retrieval Based on Local Level Feature Extraction

    Directory of Open Access Journals (Sweden)

    Sumaira Muhammad Hayat Khan

    2010-10-01

    Full Text Available Image retrieval based on text annotation has become obsolete and is no longer interesting for scientists because of its high time complexity and low precision in results. Alternatively, increase in the amount of digital images has generated an excessive need for an accurate and efficient retrieval system. This paper proposes content based image retrieval technique at a local level incorporating all the rudimentary features. Image undergoes the segmentation process initially and each segment is then directed to the feature extraction process. The proposed technique is also based on image's content which primarily includes texture, shape and color. Besides these three basic features, FD (Fourier Descriptors) and edge histogram descriptors are also calculated to enhance the feature extraction process by taking hold of information at the boundary. Performance of the proposed method is found to be quite adequate when compared with the results from one of the best local level CBIR (Content Based Image Retrieval) techniques.
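
    As one concrete example of boundary information of the kind mentioned above, the following sketch computes simple Fourier descriptors from a binary object mask with scikit-image and NumPy; the normalisation steps and the number of retained coefficients are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from skimage import measure

def fourier_descriptors(binary_mask, n_coeffs=16):
    """Fourier descriptors of the first contour found in a binary object mask."""
    contour = measure.find_contours(binary_mask.astype(float), 0.5)[0]   # (row, col) points
    z = contour[:, 1] + 1j * contour[:, 0]           # boundary as a complex-valued signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                                    # drop the DC term: translation invariance
    mags = np.abs(coeffs)                            # magnitudes: rotation/start-point invariance
    mags = mags / (mags[1] + 1e-12)                  # normalise by the first harmonic: scale invariance
    return mags[1:n_coeffs + 1]

# descriptor = fourier_descriptors(segment_mask)     # one segmented region at a time
```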

  15. SU-E-J-256: Predicting Metastasis-Free Survival of Rectal Cancer Patients Treated with Neoadjuvant Chemo-Radiotherapy by Data-Mining of CT Texture Features of Primary Lesions

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, H; Wang, J; Shen, L; Hu, W; Wan, J; Zhou, Z; Zhang, Z [Fudan University Shanghai Cancer Center, Shanghai (China)

    2015-06-15

    Purpose: The purpose of this study is to investigate the relationship between computed tomographic (CT) texture features of primary lesions and metastasis-free survival for rectal cancer patients; and to develop a data-mining prediction model using texture features. Methods: A total of 220 rectal cancer patients treated with neoadjuvant chemo-radiotherapy (CRT) were enrolled in this study. All patients underwent CT scans before CRT. The primary lesions on the CT images were delineated by two experienced oncologists. The CT images were filtered by Laplacian of Gaussian (LoG) filters with different filter values (1.0–2.5: from fine to coarse). Both filtered and unfiltered images were analyzed using Gray-level Co-occurrence Matrix (GLCM) texture analysis with different directions (transversal, sagittal, and coronal). In total, 270 texture features with different types, directions and filter values were extracted. Texture features were examined with Student’s t-test for selecting predictive features. Principal Component Analysis (PCA) was performed upon the selected features to reduce the feature collinearity. Artificial neural network (ANN) and logistic regression were applied to establish metastasis prediction models. Results: Forty-six of 220 patients developed metastasis with a follow-up time of more than 2 years. Sixty-seven texture features were significantly different in the t-test (p<0.05) between patients with and without metastasis, and 12 of them were extremely significant (p<0.001). The Area-under-the-curve (AUC) of ANN was 0.72, and the concordance index (CI) of logistic regression was 0.71. The predictability of ANN was slightly better than logistic regression. Conclusion: CT texture features of primary lesions are related to metastasis-free survival of rectal cancer patients. Both ANN and logistic regression based models can be developed for prediction.
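
    A rough sketch of this analysis chain (LoG filtering at several widths, GLCM features per ROI, t-test selection, PCA, then logistic regression) is given below with SciPy and scikit-learn. The ROI arrays, outcome labels, filter values and GLCM properties are assumptions standing in for the study's 270-feature setup, and the ANN branch is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from scipy.stats import ttest_ind
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def roi_texture_features(roi, sigmas=(0.0, 1.0, 1.5, 2.0, 2.5)):
    """GLCM texture statistics of an ROI, unfiltered (sigma 0) and LoG-filtered."""
    feats = []
    for s in sigmas:
        img = roi.astype(float) if s == 0.0 else gaussian_laplace(roi.astype(float), sigma=s)
        img8 = np.uint8(255 * (img - img.min()) / (np.ptp(img) + 1e-12))   # rescale to 8-bit levels
        glcm = graycomatrix(img8, [1], [0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)

# X = np.vstack([roi_texture_features(r) for r in rois])    # rois, labels assumed available
# y = np.asarray(metastasis_labels)
# keep = [i for i in range(X.shape[1])
#         if ttest_ind(X[y == 1, i], X[y == 0, i]).pvalue < 0.05]          # univariate selection
# model = make_pipeline(PCA(n_components=5), LogisticRegression()).fit(X[:, keep], y)
```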

  16. Forest Fire Smoke Recognition Based on Color and Texture Features

    Institute of Scientific and Technical Information of China (English)

    兰久强; 刘金清; 刘引; 吴庆祥

    2016-01-01

    Aiming at intelligent early warning of forest fires, a method based on color and texture features is proposed for forest fire smoke recognition. First, color features are used to determine the smoke-suspected area. Then the local binary pattern variance (LBPV) is used to extract the irregularity of the texture in the suspected area and to produce LBP images. Wavelet transform is then used to decompose the LBP images and to extract fuzziness, complexity and correlation features. Finally, a support vector machine (SVM) is used to identify the fire smoke. The results demonstrate that the method combining color and texture features can effectively recognize forest fire smoke, providing an effective solution for the study of forest fire smoke recognition.
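
    A loose sketch of the texture part of this chain is given below: an LBP variance map of a candidate region, a one-level 2-D wavelet decomposition of that map, sub-band statistics as the feature vector, and an SVM. The colour gating step is omitted, scikit-image's "var" LBP is used as an approximation of the paper's LBPV, and the region arrays and labels are assumed to exist.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def smoke_features(gray_region):
    """Wavelet sub-band statistics of a local-variance LBP map of one candidate region."""
    lbp_var = np.nan_to_num(local_binary_pattern(gray_region, P=8, R=1, method="var"))
    cA, (cH, cV, cD) = pywt.dwt2(lbp_var, "db2")       # one-level 2-D wavelet decomposition
    return np.array([f(b) for b in (cA, cH, cV, cD) for f in (np.mean, np.std)])

# X = np.vstack([smoke_features(r) for r in candidate_regions]); y = smoke_labels
# clf = SVC(kernel="rbf").fit(X, y)                    # SVM smoke / non-smoke decision
```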

  17. The Image Texture Extraction of Hami Melon Based on OpenCV

    Institute of Scientific and Technical Information of China (English)

    肖文东; 马本学

    2011-01-01

    The hyper-spectral images of Hami melons were acquired with a 400-1000 nm hyper-spectral imaging system, and the images used for texture extraction were selected. Through morphological denoising, image smoothing, background elimination and binarization, programmed in the VC++ 6.0 environment with the open-source computer vision library OpenCV, the binary image of the Hami melon texture was obtained and the texture features of the hyper-spectral image were extracted. Experimental results show that the method can extract the Hami melon texture features quickly and effectively.

  18. Computer Graphics Meets Image Fusion: the Power of Texture Baking to Simultaneously Visualise 3d Surface Features and Colour

    Science.gov (United States)

    Verhoeven, G. J.

    2017-08-01

    Since a few years, structure-from-motion and multi-view stereo pipelines have become omnipresent in the cultural heritage domain. The fact that such Image-Based Modelling (IBM) approaches are capable of providing a photo-realistic texture along the three-dimensional (3D) digital surface geometry is often considered a unique selling point, certainly for those cases that aim for a visually pleasing result. However, this texture can very often also obscure the underlying geometrical details of the surface, making it very hard to assess the morphological features of the digitised artefact or scene. Instead of constantly switching between the textured and untextured version of the 3D surface model, this paper presents a new method to generate a morphology-enhanced colour texture for the 3D polymesh. The presented approach tries to overcome this switching between object visualisations by fusing the original colour texture data with a specific depiction of the surface normals. Whether applied to the original 3D surface model or a low-resolution derivative, this newly generated texture does not solely convey the colours in a proper way but also enhances the small- and large-scale spatial and morphological features that are hard or impossible to perceive in the original textured model. In addition, the technique is very useful for low-end 3D viewers, since no additional memory and computing capacity are needed to convey relief details properly. Apart from simple visualisation purposes, the textured 3D models are now also better suited for on-surface interpretative mapping and the generation of line drawings.

  19. HEURISTICAL FEATURE EXTRACTION FROM LIDAR DATA AND THEIR VISUALIZATION

    OpenAIRE

    Ghosh, S.; Lohani, B.

    2012-01-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clu...

  20. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. In addition, many methods need human intervention to obtain road seeds (semi-automatic extraction), which makes them strongly human-dependent and inefficient. A road-extraction method that uses image segmentation based on the principle of local grey consistency and integrates shape features is proposed in this paper. Firstly, the image is segmented, and both linear and curved roads are obtained by using several object shape features, so that methods which extract only linear roads are rectified. Secondly, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are refined by combining edge information. In the experiments, images including roads with good grey uniformity as well as poorly illuminated road surfaces were chosen, and the results prove that the method of this study is promising.
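
    The region-growing step referred to above can be sketched as follows; seed selection, the shape-feature analysis and the edge-based refinement are omitted, and the 4-connectivity and grey-level tolerance are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a 4-connected region from `seed` (row, col) on a 2-D grey-level image:
    a pixel joins while its grey level stays within `tol` of the current region mean."""
    grown = np.zeros(image.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not grown[nr, nc]
                    and abs(float(image[nr, nc]) - total / count) <= tol):
                grown[nr, nc] = True
                queue.append((nr, nc))
                total += float(image[nr, nc])
                count += 1
    return grown

# mask = region_grow(gray_image, seed=(120, 340), tol=8)   # one illustrative road seed
```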

  1. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features embodied in a part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  3. Textural features of Holocene perennial saline lake deposits of the Taoudenni Agorgott basin, northern Mali

    Science.gov (United States)

    Mees, F.

    1999-08-01

    The Holocene salt lake deposits of the Taoudenni-Agorgott basin, northern Mali, mainly consist of sediments with a high glauberite (Na 2Ca(SO 4) 2) content. The remainder of the deposits largely consists of salt beds with a bloedite (Na 2Mg(SO 4) 2·4H 2O), thenardite (Na 2SO 4) or halite (NaCl) composition. A petrographical study of the deposits demonstrates that they formed in a perennial lake that experienced a gradual decrease in water depth. Textural features of the glauberite-dominated deposits are found to be related to water depth, through the control that this factor exerts on the sensitivity of the lake to changes in water supply and to short-term variations in evaporation rates. In this way, layering — due to variations in glauberite content and crystal size — is inferred to be typical of deposits that formed in shallow water, whereas unstratified deposits are the product of high lake level stages. Halite textures are found to be indicative of the place within the water column where crystal growth occurred (along the lake bottom or higher), which is mainly determined by water depth and partly by evaporation rates. The oldest halite beds are largely unaltered cumulate deposits, whereas the youngest layers developed exclusively through bottom growth. The basal part of one thick halite bed at a level between these two groups of halite layers developed by an alternation of both types of growth, in response to variations in evaporation rates. Variations in mineralogical composition between and within the salt beds that formed during the earliest periods with a higher salinity, up to the first stage with halite formation, record a change in lake water chemistry with time but they are in one instance also determined by an early diagenetic mineral transformation.

  4. Urban Built-Up Area Extraction from Landsat TM/ETM+ Images Using Spectral Information and Multivariate Texture

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2014-08-01

    Full Text Available Urban built-up area information is required by various applications. However, urban built-up area extraction using moderate resolution satellite data, such as Landsat series data, is still a challenging task due to significant intra-urban heterogeneity and spectral confusion with other land cover types. In this paper, a new method that combines spectral information and multivariate texture is proposed. The multivariate textures are separately extracted from multispectral data using a multivariate variogram with different distance measures, i.e., Euclidean, Mahalanobis and spectral angle distances. The multivariate textures and the spectral bands are then combined for urban built-up area extraction. Because the urban built-up area is the only target class, a one-class classifier, one-class support vector machine, is used. For comparison, the classical gray-level co-occurrence matrix (GLCM is also used to extract image texture. The proposed method was evaluated using bi-temporal Landsat TM/ETM+ data of two megacity areas in China. Results demonstrated that the proposed method outperformed the use of spectral information alone and the joint use of the spectral information and the GLCM texture. In particular, the inclusion of multivariate variogram textures with spectral angle distance achieved the best results. The proposed method provides an effective way of extracting urban built-up areas from Landsat series images and could be applicable to other applications.

  5. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    Science.gov (United States)

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2016-09-12

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently explore the complementary properties among the different features, which should help boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
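
    For contrast, the baseline that this abstract argues against, simply stacking the spectral and spatial features into one long vector and applying a single dimension-reduction step, looks roughly like the sketch below; the proposed joint selection/extraction algorithm itself is not reproduced here, and the feature matrices, labels and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def concatenation_baseline(spectral, texture, morph, labels, n_components=30):
    """Naive baseline: concatenate per-pixel spectral and spatial features,
    reduce the stacked vector with PCA, then classify with an SVM."""
    X = np.hstack([spectral, texture, morph])          # (n_pixels, n_bands + n_texture + n_morph)
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    return model.fit(X, labels)

# baseline = concatenation_baseline(spectral_train, texture_train, morph_train, y_train)
# accuracy = baseline.score(np.hstack([spectral_test, texture_test, morph_test]), y_test)
```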

  6. Wear Debris Identification Using Feature Extraction and Neural Network

    Institute of Scientific and Technical Information of China (English)

    王伟华; 马艳艳; 殷勇辉; 王成焘

    2004-01-01

    A method and results for the identification of wear debris using morphological features are presented. Color images of wear debris were used as initial data. Each particle was characterized by a set of numerical parameters combining its shape, color and surface texture features through a computer vision system. Those features were used as the input vector of an artificial neural network for wear debris identification. A radial basis function (RBF) network based model suitable for wear debris recognition was established, and its algorithm is presented in detail. Compared with traditional recognition methods, the RBF network model converges faster and is more accurate.
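
    A compact sketch of an RBF network of the general kind described above is given below: k-means supplies the centres, Gaussian activations form the hidden layer, and the linear output weights are solved by least squares. It assumes a feature matrix X (shape, colour and surface texture parameters per particle) and integer class labels y, and is an illustration rather than the authors' model.

```python
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFNet:
    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        # Gaussian basis activations for every sample / centre pair
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers = KMeans(n_clusters=self.n_centers, n_init=10).fit(X).cluster_centers_
        T = np.eye(int(y.max()) + 1)[y]                       # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), T, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X).dot(self.W).argmax(axis=1)

# net = SimpleRBFNet(n_centers=15, gamma=0.5).fit(X_train, y_train)
# y_pred = net.predict(X_test)
```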

  7. Fusion of Pixel-based and Object-based Features for Road Centerline Extraction from High-resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    CAO Yungang

    2016-10-01

    Full Text Available A novel approach for road centerline extraction from high spatial resolution satellite imagery is proposed by fusing both pixel-based and object-based features. Firstly, texture and shape features are extracted at the pixel level, and spectral features are extracted at the object level based on multi-scale image segmentation maps. Then, the extracted multiple features are combined in the fusion framework of Dempster-Shafer evidence theory to roughly identify the road network regions. Finally, an automatic noise removing algorithm combined with a tensor voting strategy is presented to accurately extract the road centerline. Experimental results using high-resolution satellite imagery with different scenes and spatial resolutions showed that the proposed approach compares favorably with traditional methods, particularly in eliminating salt noise and the conglutination phenomenon.

  8. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or in the post-processing steps. As a main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces. In this way, the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared for various sample sizes by Support Vector Machines using the k-fold cross-validation method. According to the presented results, the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
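
    A hedged sketch of the window-based colour-texture idea is shown below: Haralick-style GLCM statistics are computed in a square window around a pixel for each RGB and L*a*b* channel and validated with a k-fold SVM. The window size, channel set, GLCM properties and the list of sampled pixels are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def window_haralick(channel, r, c, half=8):
    """GLCM statistics of the (2*half+1) square window centred on pixel (r, c)."""
    win = channel[r - half:r + half + 1, c - half:c + half + 1]
    win8 = np.uint8(255 * (win - win.min()) / (np.ptp(win) + 1e-12))
    glcm = graycomatrix(win8, [1], [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()               # averaged over the four angles
                     for p in ("contrast", "correlation", "energy", "homogeneity")])

def pixel_features(rgb_image, r, c, half=8):
    """Concatenated window features from the three RGB and three L*a*b* channels."""
    lab = rgb2lab(rgb_image)
    chans = [rgb_image[..., i].astype(float) for i in range(3)] + \
            [lab[..., i] for i in range(3)]
    return np.hstack([window_haralick(ch, r, c, half) for ch in chans])

# X = np.vstack([pixel_features(img, r, c) for (r, c) in sampled_pixels])
# scores = cross_val_score(SVC(kernel="rbf"), X, pixel_labels, cv=5)   # k-fold validation
```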

  9. Light extraction efficiency of GaN-based LED with pyramid texture by using ray path analysis.

    Science.gov (United States)

    Pan, Jui-Wen; Wang, Chia-Shen

    2012-09-10

    We study three different gallium-nitride (GaN) based light emitting diode (LED) cases based on the different locations of the pyramid textures. In case 1, the pyramid texture is located on the sapphire top surface; in case 2, the pyramid texture is located on the P-GaN top surface; while in case 3, the pyramid texture is located on both the sapphire and P-GaN top surfaces. We study the relationship between the light extraction efficiency (LEE) and the angle of slant of the pyramid texture. The optimized total LEE was highest for case 3 among the three cases. Moreover, the seven escape paths along which most of the escaped photon flux propagated were selected in a simulation of the LEDs. The seven escape paths were used to estimate the slant angle for the optimization of LEE and to precisely analyze the photon escape paths.

  10. Handwritten Character Classification using the Hotspot Feature Extraction Technique

    NARCIS (Netherlands)

    Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2012-01-01

    Feature extraction techniques can be important in character recognition, because they can enhance the efficacy of recognition in comparison to featureless or pixel-based approaches. This study aims to investigate the novel feature extraction technique called the hotspot technique in order to use it

  11. Improved Framework for Breast Cancer Detection using Hybrid Feature Extraction Technique and FFNN

    Directory of Open Access Journals (Sweden)

    Ibrahim Mohamed Jaber Alamin

    2016-10-01

    Full Text Available Breast cancer early detection using image processing techniques suffers from low accuracy in various automated medical tools. To improve the accuracy, many research studies are still ongoing on the different phases such as segmentation, feature extraction, detection, and classification. The proposed framework consists of four main steps: image preprocessing, image segmentation, feature extraction and finally classification. This paper presents a hybrid, automated image-processing-based framework for breast cancer detection. For image preprocessing, both Laplacian and average filtering are used for smoothing and noise reduction, if needed. These operations are performed on 256 x 256 grey-scale images. The output of the preprocessing phase is used in the segmentation phase, and a separate algorithm is designed for the preprocessing step with the goal of improving accuracy. The contributed segmentation method is an improved version of the region growing technique: breast image segmentation is done using the proposed modified region growing technique, which overcomes the limitations of orientation as well as intensity. The next step is feature extraction, for which the framework uses a combination of different types of features, namely texture features, gradient features, and 2D-DWT features with higher order statistics (HOS). Such a hybrid feature set helps to improve the detection accuracy. For the last phase, an efficient feed forward neural network (FFNN) is used. A comparative study between the existing 2D-DWT feature extraction and the proposed HOS-2D-DWT based feature extraction method is presented.
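
    The hybrid feature step can be sketched as below: a two-level 2-D DWT of an already segmented region, higher-order statistics of each sub-band as features, and a feed-forward neural network. The wavelet, the statistics and the network size are assumptions, and the preprocessing and modified region growing stages are taken as given.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier

def dwt_hos_features(region, wavelet="db4", level=2):
    """Mean, std and higher-order statistics of every sub-band of a 2-level 2-D DWT."""
    coeffs = pywt.wavedec2(region.astype(float), wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        v = band.ravel()
        feats += [v.mean(), v.std(), skew(v), kurtosis(v)]   # HOS per sub-band
    return np.array(feats)

# X = np.vstack([dwt_hos_features(r) for r in segmented_regions]); y = labels
# ffnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
```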

  12. Analytical Study of Feature Extraction Techniques in Opinion Mining

    Directory of Open Access Journals (Sweden)

    Pravesh Kumar Singh

    2013-07-01

    Full Text Available Although opinion mining is in a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract opinions of people based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering, support vector machines, etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses the various techniques and the second part makes a detailed appraisal of the major techniques used for feature extraction.

  13. Efficient sparse kernel feature extraction based on partial least squares.

    Science.gov (United States)

    Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John

    2009-08-01

    The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.
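
    A simplified illustration of PLS-based feature extraction is given below using scikit-learn's dense linear PLS; the sparse kernel variants SMA and SMC described above are not implemented here, but the sketch shows the basic idea of learning projection directions from X and y and mapping unseen examples into the k-dimensional extracted-feature space.

```python
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC

def pls_feature_extractor(X_train, y_train, k=5):
    """Fit PLS projection directions; transform() then yields k extracted features."""
    return PLSRegression(n_components=k).fit(X_train, y_train)

# pls = pls_feature_extractor(X_train, y_train, k=5)
# clf = SVC().fit(pls.transform(X_train), y_train)
# y_pred = clf.predict(pls.transform(X_test))        # projecting new test examples
```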

  14. Auto-Segmentation of Head and Neck Cancer using Textural features

    DEFF Research Database (Denmark)

    Hollensen, Christian; Jørgensen, Peter Stanley; Højgaard, Liselotte;

    - and intra observer variability. Several automatic segmentation methods have been developed using positron emission tomography (PET) and/or computerised tomography (CT). The aim of the present study is to develop a model for 3-dimensional auto-segmentation, the level set method, to contour gross tumour...... inside and outside the GTV respectively to choose an appropriate feature combination for segmentation of the GTV. The feature combination with the highest dissimilarity was extracted on PET and CT images from the remaining 25 HNC patients. Using these features as input for a level set segmentation method...... the tumours were segmented automatically. Segmentation results were evaluated against manual contours of radiologists using the DICE coefficient, and sensitivity. The result of the level set approach method was compared with threshold segmentation of PET standard uptake value (SUV) of 3 or 20% of maximal...

  15. WE-E-17A-05: Complementary Prognostic Value of CT and 18F-FDG PET Non-Small Cell Lung Cancer Tumor Heterogeneity Features Quantified Through Texture Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Desseroit, M; Cheze Le Rest, C; Tixier, F [CHU Poitiers Poitiers (France); INSERM LaTIM UMR 1101, Brest (France); Majdoub, M; Visvikis, D; Hatt, M [INSERM LaTIM UMR 1101, Brest (France); Guillevin, R; Perdrisot, R [CHU Poitiers Poitiers (France)

    2014-06-15

    Purpose: Previous studies have shown that CT or 18F-FDG PET intratumor heterogeneity features computed using texture analysis may have prognostic value in Non-Small Cell Lung Cancer (NSCLC), but have been mostly investigated separately. The purpose of this study was to evaluate the potential added value with respect to prognosis regarding the combination of non-enhanced CT and 18F-FDG PET heterogeneity textural features on primary NSCLC tumors. Methods: One hundred patients with non-metastatic NSCLC (stage I–III), treated with surgery and/or (chemo)radiotherapy, that underwent staging 18F-FDG PET/CT images, were retrospectively included. Morphological tumor volumes were semi-automatically delineated on non-enhanced CT using 3D SlicerTM. Metabolically active tumor volumes (MATV) were automatically delineated on PET using the Fuzzy Locally Adaptive Bayesian (FLAB) method. Intratumoral tissue density and FDG uptake heterogeneities were quantified using texture parameters calculated from co-occurrence, difference, and run-length matrices. In addition to these textural features, first order histogram-derived metrics were computed on the whole morphological CT tumor volume, as well as on sub-volumes corresponding to fine, medium or coarse textures determined through various levels of LoG-filtering. Association with survival regarding all extracted features was assessed using Cox regression for both univariate and multivariate analysis. Results: Several PET and CT heterogeneity features were prognostic factors of overall survival in the univariate analysis. CT histogram-derived kurtosis and uniformity, as well as Low Grey-level High Run Emphasis (LGHRE), and PET local entropy were independent prognostic factors. Combined with stage and MATV, they led to a powerful prognostic model (p<0.0001), with median survival of 49 vs. 12.6 months and a hazard ratio of 3.5. Conclusion: Intratumoral heterogeneity quantified through textural features extracted from both CT and FDG PET

  16. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award winning Feature Analyst®...

  17. Screening Mississippi River Levees Using Texture-Based and Polarimetric-Based Features from Synthetic Aperture Radar Data

    Directory of Open Access Journals (Sweden)

    Lalitha Dabbiru

    2017-03-01

    Full Text Available This article reviews the use of synthetic aperture radar remote sensing data for earthen levee mapping with an emphasis on finding the slump slides on the levees. Earthen levees built on the natural levees parallel to the river channel are designed to protect large areas of populated and cultivated land in the United States from flooding. One of the signs of potential impending levee failure is the appearance of slump slides. On-site inspection of levees is expensive and time-consuming; therefore, a need to develop efficient techniques based on remote sensing technologies is mandatory to prevent failures under flood loading. Analysis of multi-polarized radar data is one of the viable tools for detecting the problem areas on the levees. In this study, we develop methods to detect anomalies on the levee, such as slump slides, and give levee managers new tools to prioritize their tasks. This paper presents results of applying the National Aeronautics and Space Administration (NASA) Jet Propulsion Lab (JPL) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) quad-polarized L-band data to detect slump slides on earthen levees. The study area encompasses a portion of levees of the lower Mississippi River in the United States. In this paper, we investigate the performance of polarimetric and texture features for efficient levee classification. Texture features derived from the gray level co-occurrence matrix (GLCM) and the discrete wavelet transform were computed and analyzed for efficient levee classification. The pixel-based polarimetric decomposition features, such as entropy, anisotropy, and scattering angle were also computed and applied to the support vector machine classifier to characterize the radar imagery, and the results were compared with texture-based classification. Our experimental results showed that inclusion of textural features derived from the SAR data using the discrete wavelet transform (DWT) features and GLCM features provided

  18. SU-D-BRA-06: Dual-Energy Chest CT: The Effects of Virtual Monochromatic Reconstructions On Texture Analysis Features

    Energy Technology Data Exchange (ETDEWEB)

    Sorensen, J; Duran, C; Stingo, F; Wei, W; Rao, A; Zhang, L; Court, L; Erasmus, J; Godoy, M [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: To characterize the effect of virtual monochromatic reconstructions on several commonly used texture analysis features in DECT of the chest. Further, to assess the effect of monochromatic energy levels on the ability of these textural features to identify tissue types. Methods: 20 consecutive patients underwent chest CTs for evaluation of lung nodules using Siemens Somatom Definition Flash DECT. Virtual monochromatic images were constructed at 10keV intervals from 40–190keV. For each patient, an ROI delineated the lesion under investigation, and cylindrical ROI’s were placed within 5 different healthy tissues (blood, fat, muscle, lung, and liver). Several histogram- and Grey Level Cooccurrence Matrix (GLCM)-based texture features were then evaluated in each ROI at each energy level. As a means of validation, these feature values were then used in a random forest classifier to attempt to identify the tissue types present within each ROI. Their predictive accuracy at each energy level was recorded. Results: All textural features changed considerably with virtual monochromatic energy, particularly below 70keV. Most features exhibited a global minimum or maximum around 80keV, and while feature values changed with energy above this, patient ranking was generally unaffected. As expected, blood demonstrated the lowest inter-patient variability, for all features, while lung lesions (encompassing many different pathologies) exhibited the highest. The accuracy of these features in identifying tissues (76% accuracy) was highest at 80keV, but no clear relationship between energy and classification accuracy was found. Two common misclassifications (blood vs liver and muscle vs fat) accounted for the majority (24 of the 28) errors observed. Conclusion: All textural features were highly dependent on virtual monochromatic energy level, especially below 80keV, and were more stable above this energy. However, in a random forest model, these commonly used features were

  19. Evaluation of PET texture features with heterogeneous phantoms: complementarity and effect of motion and segmentation method

    Science.gov (United States)

    Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.

    2017-01-01

    A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. Regarding the variability of PET texture features (TF), the impact of respiratory motion has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability of TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with respect to TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC) (r > 0.75), despite a significant volume underestimation. Independence of lesion movement (LC in 100% of the combined pairs of movements) was observed. The coefficient-of-variation (C_V) analysis resulted in C_V(WH) = 0.18 on the U-image and C_V(WH) = 0.24, C_V(ENG) = 0.15, C_V(LH) = 0.07 and C_V(ENT) = 0.06 on the G-image. Apart from WH (r > 0.9, p < 0.001), none of these TF showed LC with Cmax. Complementarity was observed for the TF pairs ENG-LH, CONT (contrast)-ENT and LH-ENT. In conclusion, the effect of respiratory motion should be taken into account when the heterogeneity of lung cancer is quantified on PET/CT images. Despite inaccurate volume delineation, TF derived from 40% and COA contours could be reliable for their prognostic use. The TF that exhibited simultaneous added value and independence of lesion

  20. GIS-based multifractal/inversion methods for feature extraction and applications in anomaly identification for mineral exploration

    Science.gov (United States)

    Li, Qingmou

    Mineralization is often intertwined spatially with other processes. This makes it difficult to extract features for mineral exploration, and the existing techniques are far from adequate in support of this purpose. A series of multifractal feature extraction techniques in the spatial, Walsh and eigenspace domains, together with other methods, were developed in a GIS environment for mineral prospecting in this thesis. Techniques in the spatial domain, including spatial moments, gradient parameters, and local singularity, are reviewed and implemented with the emphasis on singularity analysis, which extracts features on the basis of local self-similarity and spatial association properties. A new multifractal method (W-A) was developed in the Walsh domain. The W-A model is demonstrated to be advantageous for extracting abruptly changing features. This advantage comes from the square wave basis functions of the Walsh transformation (WT). A new multifractal singular-value decomposition (MSVD) model is developed on the basis of scale invariance in eigenspace for feature extraction. The eigenimage and power spectrum structure of the studied area are investigated. The features extracted using the MSVD method exhibit rich textures and are particularly capable of capturing weak and subtle features from data with a strong background influence and sharp (fault-related) changes in values. New Gauss inversion (GI) and hierarchical decomposition methods have been developed for distinguishing probability density functions (PDF) from mixed populations. Forward modeling, least squares (LS) segmentation, and the GI are compared. These methods were used in estimating the spatial and entropy distributions. These features have rich textures portraying underground intrusions that are related to the hydrothermal mineral alteration in the study area. Data from southwestern Nova Scotia, Canada, were processed. The results have shown that the features extracted using the techniques developed are associated with a prior mineral

  1. Extracting Conceptual Feature Structures from Text

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Lassen, Tine;

    2011-01-01

    This paper describes an approach to indexing texts by their conceptual content using ontologies along with lexico-syntactic information and semantic role assignment provided by lexical resources. The conceptual content of meaningful chunks of text is transformed into conceptual feature structures...

  2. OUT-OF-FOCUS REGION SEGMENTATION OF 2D SURFACE IMAGES WITH THE USE OF TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    K. Anding

    2015-09-01

    Full Text Available A segmentation method for out-of-focus image regions of processed metal surfaces, based on focus-related textural features, is proposed. Such regions contain a small amount of useful information. The object of study is a metal surface which has a cone shape. Some regions of the images are blurred because the depth of field of industrial cameras is limited. Automatic removal of out-of-focus regions in such images is one possible solution to this problem. Focus texture features were used to calculate characteristics that describe the sharpness of a particular image area. Such features are used in the autofocus systems of microscopes and cameras, and their application to the segmentation of out-of-focus image regions is unusual. Thirty-four textural features were tested on a set of metal surface images with out-of-focus regions. The most useful features, allowing more accurate segmentation of an image, are the average grey level and the spatial frequency. The proposed segmentation method for out-of-focus image regions of metal surfaces can be successfully applied to the evaluation of the processing quality of materials with industrial cameras. The method has a simple implementation and high calculation speed.
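    As a rough illustration of the focus measures named above, the short Python sketch below (NumPy only; the tile size, the synthetic test image and the threshold value are illustrative assumptions, not taken from the record) computes the average grey level and the spatial frequency of each image tile, two values that can then be thresholded to flag out-of-focus regions.

        import numpy as np

        def tile_focus_features(gray, tile=32):
            """Return per-tile (row, col, mean grey level, spatial frequency) for a 2-D grey image."""
            h, w = gray.shape
            feats = []
            for r in range(0, h - tile + 1, tile):
                for c in range(0, w - tile + 1, tile):
                    block = gray[r:r + tile, c:c + tile].astype(float)
                    mean_grey = block.mean()
                    # spatial frequency: RMS of horizontal and vertical first differences
                    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
                    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
                    feats.append((r, c, mean_grey, np.sqrt(rf ** 2 + cf ** 2)))
            return feats

        # toy usage: a synthetic image whose right half is flat, standing in for a blurred area
        img = np.random.randint(0, 256, (128, 128))
        img[:, 64:] = img[:, 64:].mean()
        for r, c, m, sf in tile_focus_features(img):
            in_focus = sf > 5.0   # arbitrary illustration threshold

    Spatial frequency here is the usual root-mean-square of horizontal and vertical first differences, so flat (defocused) tiles score low while sharply textured tiles score high.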

  3. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve or search the information contained in images. The traditional Text Based Image Retrieval system is not adequate, since it is time-consuming and requires manual image annotation; moreover, the annotation differs from person to person. An alternative to this is the Content Based Image Retrieval (CBIR) system, which retrieves or searches for images using their contents rather than text, keywords etc. A great deal of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to employ when defining an object in an image, as compared to other features such as color, texture etc. Over and above, no descriptor applied alone will give fruitful results. Further, by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. So, an attempt is made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  4. Feature extraction and classification in surface grading application using multivariate statistical projection models

    Science.gov (United States)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database is comprised of 14 different models, 42 surface classes and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the factors effect. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved a 97.36% of accuracy.

  5. [RVM supervised feature extraction and Seyfert spectra classification].

    Science.gov (United States)

    Li, Xiang-Ru; Hu, Zhan-Yi; Zhao, Yong-Heng; Li, Xiao-Ming

    2009-06-01

    With recent technological advances in wide field survey astronomy and the implementation of several large-scale astronomical survey proposals (e.g. SDSS, 2dF and LAMOST), celestial spectra are becoming very abundant and rich. Therefore, research on automated classification methods based on celestial spectra has been attracting more and more attention in recent years. Feature extraction is a fundamental problem in automated spectral classification, which not only influences the difficulty and complexity of the problem, but also determines the performance of the designed classifying system. The available methods of feature extraction for spectra classification are usually unsupervised, e.g. principal components analysis (PCA), wavelet transform (WT), artificial neural networks (ANN) and Rough Set theory. These methods extract features not by their capability to classify spectra, but by some kind of power to approximate the original celestial spectra. Therefore, the features extracted by these methods usually are not the best ones for classification. In the present work, the authors pointed out the necessity of investigating supervised feature extraction by analyzing the characteristics of spectra classification research in the available literature and the limitations of unsupervised feature extraction methods. The authors also studied supervised feature extraction based on the relevance vector machine (RVM) and its application in Seyfert spectra classification. RVM is a recently introduced method based on Bayesian methodology, automatic relevance determination (ARD), regularization techniques and a hierarchical prior structure. With this method, the authors can easily fuse the information in the training data with their prior knowledge and beliefs about the problem, and RVM can effectively extract features and reduce the data based on classifying capability. Extensive experiments show its superior performance in dimensional reduction and feature extraction for Seyfert

  6. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  7. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
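    For readers who want to reproduce a baseline comparison, the following Python sketch shows plain SIFT tie-point extraction and ratio-test matching with OpenCV; it is not the authors' auto-adaptive A(2) SIFT, the image file names are placeholders, and cv2.SIFT_create assumes an OpenCV build (4.4 or later, or opencv-contrib) that ships the SIFT implementation.

        import cv2

        # Generic SIFT tie-point extraction and matching between two overlapping aerial frames.
        img1 = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test on 2-nearest-neighbour matches keeps only distinctive tie points.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
        print(len(good), "candidate tie points")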

  8. Image segmentation with random walker based on LBP texture features

    Institute of Scientific and Technical Information of China (English)

    郭艳蓉; 蒋建国; 郝世杰; 詹曙; 李鸿

    2013-01-01

    In this paper, we propose a new random walker model for texture image segmentation through solving a symmetric, semi-positive-definite system of linear equations equipped with texture information. In the construction of the equations, we perform feature extraction based on the Local Binary Pattern (LBP) and map the original image into a space in which different textures are clearly distinguished from each other (the LBP map). The similarity between pixels is then constructed by combining the LBP, gradient and geometric information in a reciprocal fashion. These similarities form the edge weights of the graph, which lets the labels of the seeded regions propagate into the unlabeled regions during the random walker process, thereby achieving texture image segmentation. Experiments on texture images, synthetic noise images and medical images (MRI, CT) show that the proposed method successfully extends state-of-the-art random walker segmentation to texture images and outperforms some other texture segmentation algorithms, particularly on multi-label problems, according to both qualitative and quantitative results.
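    A minimal sketch of the LBP map idea is given below in Python/NumPy: each interior pixel receives an 8-bit code from comparisons with its 8 neighbours, producing an image in which differently textured regions take on different code statistics. This is the basic LBP operator, not necessarily the exact (e.g. rotation-invariant or multi-resolution) variant used by the authors.

        import numpy as np

        def lbp_map(gray):
            """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code built from
            comparisons between its neighbours and itself (bit = 1 where neighbour >= centre)."""
            g = gray.astype(np.int32)
            centre = g[1:-1, 1:-1]
            # clockwise neighbour offsets starting at the top-left pixel
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(centre)
            for bit, (dr, dc) in enumerate(offsets):
                neigh = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
                code |= ((neigh >= centre).astype(np.int32) << bit)
            return code

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)
        print(lbp_map(img).shape)   # (62, 62) map of LBP codes in [0, 255]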

  9. A Novel Feature Extraction Scheme for Medical X-Ray Images

    Directory of Open Access Journals (Sweden)

    Prachi.G.Bhende

    2016-02-01

    Full Text Available X-ray images are gray scale images with almost the same textural characteristics. Conventional texture or color features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a novel combination of methods such as GLCM, LBP and HOG for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database that can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of the intensities and the information about relative positions of neighboring pixels of an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, capturing edge information over multiple cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results obtained on real problems of rotation invariance, for particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.

  10. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach.

    Science.gov (United States)

    Arebey, Maher; Hannan, M A; Begum, R A; Basri, Hassan

    2012-08-15

    This paper presents solid waste bin level detection and classification using gray level co-occurrence matrix (GLCM) feature extraction methods. GLCM parameters, such as the displacement d, the quantization G, and the number of textural features, are investigated to determine the best parameter values for the bin images. The parameter values and number of texture features are used to form the GLCM database. The most appropriate features collected from the GLCM are then used as inputs to the multi-layer perceptron (MLP) and the K-nearest neighbor (KNN) classifiers for bin image classification and grading. The classification and grading performance of the DB1, DB2 and DB3 feature sets was evaluated with both MLP and KNN classifiers. The results demonstrated that the KNN classifier, at K = 3, d = 1 and maximum G values, performs better than the MLP classifier with the same database. Based on the results, this method has the potential to be used in solid waste bin level classification and grading to provide a robust solution for solid waste bin level detection, monitoring and management.
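    To make the role of the displacement d and quantization G parameters concrete, here is a small NumPy sketch (not the authors' implementation; the horizontal-only offset and the two reported measures are simplifications) that quantises an image to G grey levels, accumulates a symmetric GLCM for displacement d, and derives contrast and homogeneity from it.

        import numpy as np

        def glcm_features(gray, d=1, G=16):
            """Symmetric, normalised grey-level co-occurrence matrix for a horizontal
            displacement d after quantising to G grey levels, plus contrast and homogeneity."""
            q = (gray.astype(float) * G / 256.0).astype(int).clip(0, G - 1)
            glcm = np.zeros((G, G), dtype=float)
            left, right = q[:, :-d].ravel(), q[:, d:].ravel()
            np.add.at(glcm, (left, right), 1.0)
            glcm += glcm.T                      # make the matrix symmetric
            glcm /= glcm.sum()
            i, j = np.indices((G, G))
            contrast = np.sum(glcm * (i - j) ** 2)
            homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
            return contrast, homogeneity

        img = (np.random.rand(100, 100) * 255).astype(np.uint8)
        print(glcm_features(img, d=1, G=16))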

  11. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  12. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    Science.gov (United States)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.

  13. A Color-Texture Based Segmentation Method To Extract Object From Background

    Directory of Open Access Journals (Sweden)

    Saka Kezia

    2013-03-01

    Full Text Available Extraction of flower regions from complex backgrounds is a difficult task, and it is an important part of flower image retrieval and recognition. Image segmentation denotes a process of partitioning an image into distinct regions. A large variety of different segmentation approaches for images have been developed. Image segmentation plays an important role in image analysis. According to several authors, segmentation terminates when the observer's goal is satisfied. For this reason, a unique method that can be applied to all possible cases does not yet exist. This paper studies flower image segmentation in complex backgrounds. Based on the differences in visual characteristics between the flower and the surrounding objects, the flower is separated from different backgrounds into a single set of flower image pixels. The segmentation methodology for flower images consists of five steps. Firstly, the original image in RGB space is transformed into the Lab color space. In the second step, the 'a' component of the Lab color space is extracted. Then segmentation by two-dimensional Otsu automatic thresholding of the 'a' channel is performed. Based on the color segmentation result and the texture differences between the background image and the required object, we extract the object using the gray level co-occurrence matrix for texture segmentation. The GLCMs essentially represent the joint probability of occurrence of grey levels for pixels with a given spatial relationship in a defined region. Finally, the segmentation result is corrected by mathematical morphology methods. The algorithm was tested on a plague image database and the results prove to be satisfactory. The algorithm was also tested on medical images for nucleus segmentation.
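    The colour-thresholding step can be illustrated with the Python sketch below, which converts an RGB image to CIE Lab with scikit-image and applies a classic one-dimensional Otsu threshold to the 'a' channel; the paper uses a two-dimensional Otsu variant, so this is only a simplified stand-in, and the random input image is a placeholder for a real flower photograph.

        import numpy as np
        from skimage import color

        def otsu_threshold(channel, bins=256):
            """Classic 1-D Otsu: pick the threshold that maximises between-class variance."""
            hist, edges = np.histogram(channel.ravel(), bins=bins)
            p = hist.astype(float) / hist.sum()
            centres = 0.5 * (edges[:-1] + edges[1:])
            best_t, best_var = centres[0], -1.0
            for k in range(1, bins):
                w0, w1 = p[:k].sum(), p[k:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (p[:k] * centres[:k]).sum() / w0
                mu1 = (p[k:] * centres[k:]).sum() / w1
                var_between = w0 * w1 * (mu0 - mu1) ** 2
                if var_between > best_var:
                    best_var, best_t = var_between, centres[k]
            return best_t

        rgb = np.random.rand(64, 64, 3)            # stand-in for a flower photograph
        a = color.rgb2lab(rgb)[..., 1]             # 'a' (green-red) channel of CIE Lab
        mask = a > otsu_threshold(a)               # coarse colour-based foreground mask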

  14. A Texture Thesaurus for Browsing Large Aerial Photographs.

    Science.gov (United States)

    Ma, Wei-Ying; Manjunath, B. S.

    1998-01-01

    Presents a texture-based image-retrieval system for browsing large-scale aerial photographs. System components include texture-feature extraction, image segmentation and grouping, learning-similarity measure, and a texture-thesaurus model for fast search and indexing. Testing has demonstrated the system's effectiveness in searching and selecting…

  15. Topographic Feature Extraction for Bengali and Hindi Character Images

    CERN Document Server

    Bag, Soumen; 10.5121/sipij.2011.2215

    2011-01-01

    Feature selection and extraction plays an important role in different classification based problems such as face recognition, signature verification, optical character recognition (OCR) etc. The performance of OCR highly depends on the proper selection and extraction of feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed region, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar type characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi...

  16. Spoken Language Identification Using Hybrid Feature Extraction Methods

    CERN Document Server

    Kumar, Pawan; Mishra, A N; Chandra, Mahesh

    2010-01-01

    This paper introduces and motivates the use of a hybrid robust feature extraction technique for a spoken language identification (LID) system. Speech recognizers use a parametric form of a signal to get the most important distinguishable features of the speech signal for the recognition task. In this paper, Mel-frequency cepstral coefficients (MFCC) and Perceptual linear prediction coefficients (PLP), along with two hybrid features, are used for language identification. The two hybrid features, Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP), were obtained from the combination of MFCC and PLP. Two different classifiers, Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM), were used for classification. The experiment shows a better identification rate using the hybrid feature extraction techniques compared to conventional feature extraction methods. BFCC has shown better performance than MFCC with both classifiers. RPLP along with GMM has shown be

  17. Robo-Psychophysics: Extracting Behaviorally Relevant Features from the Output of Sensors on a Prosthetic Finger.

    Science.gov (United States)

    Delhaye, Benoit P; Schluter, Erik W; Bensmaia, Sliman J

    2016-01-01

    Efforts are underway to restore sensorimotor function in amputees and tetraplegic patients using anthropomorphic robotic hands. For this approach to be clinically viable, sensory signals from the hand must be relayed back to the patient. To convey tactile feedback necessary for object manipulation, behaviorally relevant information must be extracted in real time from the output of sensors on the prosthesis. In the present study, we recorded the sensor output from a state-of-the-art bionic finger during the presentation of different tactile stimuli, including punctate indentations and scanned textures. Furthermore, the parameters of stimulus delivery (location, speed, direction, indentation depth, and surface texture) were systematically varied. We developed simple decoders to extract behaviorally relevant variables from the sensor output and assessed the degree to which these algorithms could reliably extract these different types of sensory information across different conditions of stimulus delivery. We then compared the performance of the decoders to that of humans in analogous psychophysical experiments. We show that straightforward decoders can extract behaviorally relevant features accurately from the sensor output and most of them outperform humans.

  18. Usefulness of texture features for segmentation of lungs with severe diffuse interstitial lung disease

    Science.gov (United States)

    Wang, Jiahui; Li, Feng; Li, Qiang

    2010-03-01

    We developed an automated method for the segmentation of lungs with severe diffuse interstitial lung disease (DILD) in multi-detector CT. In this study, we would like to compare the performance levels of this method and a thresholding-based segmentation method for normal lungs, moderately abnormal lungs, severely abnormal lungs, and all lungs in our database. Our database includes 31 normal cases and 45 abnormal cases with severe DILD. The outlines of the lungs were manually delineated by a medical physicist and confirmed by an experienced chest radiologist. These outlines were used as reference standards for the evaluation of the segmentation results. We first employed a thresholding technique on CT value to obtain initial lungs, which contain normal and mildly abnormal lung parenchyma. We then used texture-feature images derived from the co-occurrence matrix to further segment lung regions with severe DILD. The segmented lung regions with severe DILD were combined with the initial lungs to generate the final segmentation results. We also identified and removed the airways to improve the accuracy of the segmentation results. We used three metrics, i.e., overlap, volume agreement, and mean absolute distance (MAD) between the automatically segmented lung and the reference lung, to evaluate the performance of our segmentation method and the thresholding-based segmentation method. Our segmentation method achieved a mean overlap of 96.1%, a mean volume agreement of 98.1%, and a mean MAD of 0.96 mm for the 45 abnormal cases. On the other hand, the thresholding-based segmentation method achieved a mean overlap of 94.2%, a mean volume agreement of 95.8%, and a mean MAD of 1.51 mm for the 45 abnormal cases. Our new method obtained a higher performance level than the thresholding-based segmentation method.

  19. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    Science.gov (United States)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

    In this paper, we report on our examination of the validity of OCT in identifying skin cancer through texture analysis based on Haralick texture features, fractal dimension, the Markov random field method and complex directional field features from different tissues. The described features have been used to detect specific spatial characteristics which can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. Haralick texture features such as contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate the fractal dimension of the skin probes. Markov random fields have been used to enhance the quality of the classification. Additionally, we used the complex directional field calculated by the local gradient methodology to improve the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
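    Of the listed descriptors, the fractal dimension is the easiest to sketch compactly. The NumPy snippet below estimates it by box counting on a binary mask (the random mask is a placeholder for a segmented OCT region, and the chosen box sizes are arbitrary); the Haralick, Markov random field and directional-field features would need considerably more code.

        import numpy as np

        def box_counting_dimension(binary):
            """Estimate the fractal dimension of a 2-D binary mask by box counting:
            count occupied boxes at several box sizes and fit log(count) vs log(1/size)."""
            sizes = [2, 4, 8, 16, 32]
            counts = []
            for s in sizes:
                h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
                blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        mask = np.random.rand(256, 256) > 0.5       # placeholder for a segmented lesion mask
        print(box_counting_dimension(mask))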

  20. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed features appear on small scales and features with more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method, as a multi-scale analysis tool, is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  1. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  2. Fingerprint Identification - Feature Extraction, Matching and Database Search

    NARCIS (Netherlands)

    Bazen, Asker Michiel

    2002-01-01

    Presents an overview of state-of-the-art fingerprint recognition technology for identification and verification purposes. Three principal challenges in fingerprint recognition are identified: extracting robust features from low-quality fingerprints, matching elastically deformed fingerprints and eff

  3. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.

  4. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available A scale-space extreme point extraction method for a binary multiscale and rotation-invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast method for local image feature description. Classic local feature description algorithms often select the neighborhood information of feature points that are extrema of the image scale space, obtained by constructing an image pyramid using a certain signal transform method. However, building the image pyramid always consumes a large amount of computing and storage resources, which is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm; it does not need to build the image pyramid, yet can extract scale-extreme feature points quickly. Feature points extracted by the proposed method have the characteristics of multiscale and rotation invariance and are suitable for constructing the local feature descriptor.

  5. A harmonic linear dynamical system for prominent ECG feature extraction.

    Science.gov (United States)

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To have efficiency of the clustering results, the prominent features extracted from preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features via mining the evolving hidden dynamics and correlations in ECG time series. The discovery of the comprehensible and interpretable features of the proposed feature extraction methodology effectively represents the accuracy and the reliability of clustering results. Particularly, the empirical evaluation results of the proposed method demonstrate the improved performance of clustering compared to the previous main stream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability with linear computation time to the duration of the time series.

  6. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    Directory of Open Access Journals (Sweden)

    Ngoc Anh Nguyen Thi

    2014-01-01

    Full Text Available Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To have efficiency of the clustering results, the prominent features extracted from preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features via mining the evolving hidden dynamics and correlations in ECG time series. The discovery of the comprehensible and interpretable features of the proposed feature extraction methodology effectively represents the accuracy and the reliability of clustering results. Particularly, the empirical evaluation results of the proposed method demonstrate the improved performance of clustering compared to the previous main stream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability with linear computation time to the duration of the time series.

  7. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    Science.gov (United States)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the possibility of OCT in detecting changes using a computer texture analysis method based on Haralick texture features, fractal dimension and the complex directional field method from different tissues. These features were used to identify special spatial characteristics which differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing. In this paper, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to improve the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Considering the fact that malignant tissue grows anisotropically, some principal grooves may be observed on dermoscopic images, which implies the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images including tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.

  8. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    Science.gov (United States)

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. Besides, the influences of parameter selection on the classifying performances were investigated. We found that the support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Besides, the performances of the LibSVM, SMO and IBk classifiers were influenced by some key parameters such as kernel type, c, gamma, K, etc. SVM is a promising tool in developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.
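    A hedged sketch of the reported SVM-plus-RFE idea is given below using scikit-learn; the synthetic attribute table, the number of retained features and the kernels are illustrative choices, and the SMOTE oversampling step mentioned in the abstract is omitted because it lives in a separate package (imbalanced-learn).

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for a table of histogram/texture attributes (rows = patients).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 40))
        y = rng.integers(0, 2, size=60)             # 0 = LGG, 1 = HGG (labels are illustrative)

        # A linear-kernel SVM wrapped in RFE keeps the 10 most informative attributes,
        # then an RBF SVM classifies the reduced feature set; accuracy is estimated by LOOCV.
        model = make_pipeline(StandardScaler(),
                              RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10),
                              SVC(kernel="rbf", C=1.0, gamma="scale"))
        acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
        print("LOOCV accuracy:", acc)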

  9. An introductory analysis of digital infrared thermal imaging guided oral cancer detection using multiresolution rotation invariant texture features

    Science.gov (United States)

    Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.

    2017-03-01

    This manuscript presents an analytical treatment of the feasibility of multi-scale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The incapability of present healthcare technology to detect oral cancer at the budding stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown the discriminative difference in facial temperature distribution between a normal subject and a patient. The proposed work, for the first time, exploits this difference further by representing the facial Region of Interest (ROI) using multiscale rotation-invariant Gabor filter bank responses followed by classification using a Radial Basis Function (RBF) kernelized Support Vector Machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by degradation of performance; an indication that the addition of progressively finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the ineptness of Gabor kernels to analyze low spatial frequency components over a small facial surface area. On our dataset comprising 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal v/s precancerous and 84.72% for normal v/s malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in the IR spectrum for oral cancer detection.

  10. Feature Extraction by Wavelet Decomposition of Surface

    Directory of Open Access Journals (Sweden)

    Prashant Singh

    2010-07-01

    Full Text Available The paper presents a new approach to surface acoustic wave (SAW) chemical sensor array design and data processing for recognition of volatile organic compounds (VOCs) based on transient responses. The array is constructed of variable-thickness single polymer-coated SAW oscillator sensors. The thicknesses of the polymer coatings are selected such that during the sensing period, different sensors are loaded with varied levels of diffusive inflow of vapour species due to different stages of termination of the equilibration process. Using a single polymer for coating the individual sensors with different thicknesses introduces vapour-specific kinetics variability in the transient responses. The transient shapes are analysed by wavelet decomposition based on Daubechies mother wavelets. The set of discrete wavelet transform (DWT) approximation coefficients across the array transients is taken to represent the vapour sample in two alternate ways. In one, the sets generated by all the transients are combined into a single set to give a single representation to the vapour. In the other, the set of approximation coefficients at each data point generated by all transients is taken to represent the vapour. The latter results in as many alternate representations as there are approximation coefficients. The alternate representations of a vapour sample are treated as different instances or realisations for further processing. The wavelet analysis is then followed by principal component analysis (PCA) to create a new feature space. A comparative analysis of the feature spaces created by both methods leads to the conclusion that the two methods yield complementary information: the one reveals intrinsic data variables, and the other enhances class separability. The present approach is validated by generating synthetic transient response data based on a prototype polyisobutylene (PIB)-coated 3-element SAW sensor array exposed to 7 VOC vapours: chloroform, chlorobenzene o
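    The transient representation described above can be prototyped with PyWavelets as in the Python sketch below; the 'db4' wavelet, the decomposition level and the synthetic transients are assumptions standing in for the prototype 3-element SAW array data, and the subsequent PCA step is left out.

        import numpy as np
        import pywt

        def transient_features(transients, wavelet="db4", level=3):
            """Represent each sensor transient by its DWT approximation coefficients
            (the coarse part of a multi-level Daubechies decomposition) and stack the
            per-sensor vectors into one feature vector for the vapour sample."""
            feats = []
            for x in transients:                       # one 1-D transient per sensor
                approx = pywt.wavedec(x, wavelet, level=level)[0]
                feats.append(approx)
            return np.concatenate(feats)

        # three synthetic 256-sample transients standing in for a 3-element SAW array
        array_response = [np.cumsum(np.random.randn(256)) for _ in range(3)]
        print(transient_features(array_response).shape)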

  11. Applying Feature Extraction for Classification Problems

    Directory of Open Access Journals (Sweden)

    Foon Chi

    2009-03-01

    Full Text Available With the wealth of image data that is now becoming increasingly accessible through the advent of the world wide web and the proliferation of cheap, high quality digital cameras, it is becoming ever more desirable to be able to automatically classify images into appropriate categories such that intelligent agents and other such intelligent software might make better informed decisions regarding them without a need for excessive human intervention. However, as with most Artificial Intelligence (A.I.) methods, it is seen as necessary to take small steps towards your goal. With this in mind, a method is proposed here to represent localised features using disjoint sub-images taken from several datasets of retinal images for their eventual use in an incremental learning system. A tile-based localised adaptive threshold selection method was taken for vessel segmentation based on separate colour components. Arteriole-venous differentiation was made possible by using the composite of these components and high quality fundal images. Performance was evaluated on the DRIVE and STARE datasets, achieving an average specificity of 0.9379 and sensitivity of 0.5924.

  12. Multispectral and Texture Feature Application in Image-Object Analysis of Summer Vegetation in Eastern Tajikistan Pamirs

    Directory of Open Access Journals (Sweden)

    Eric Ariel L. Salas

    2016-01-01

    Full Text Available We tested the Moment Distance Index (MDI) in combination with texture features for summer vegetation mapping in the eastern Pamir Mountains, Tajikistan, using the 2014 Landsat OLI (Operational Land Imager) image. The five major classes identified were sparse vegetation, medium-dense vegetation, dense vegetation, barren land, and water bodies. By utilizing object features in a random forest (RF) classifier, the overall classification accuracy of the land cover maps was 92% using a set of variables including texture features and MDI, and 84% using a set of variables including texture but without MDI. A decrease of the Kappa statistic, from 0.89 to 0.79, was observed when MDI was removed from the set of predictor variables. McNemar's test showed that the increase in the classification accuracy due to the addition of MDI was statistically significant (p < 0.05). The proposed method provides an effective way of discriminating sparse vegetation from barren land in an arid environment, such as the Pamir Mountains.

  13. Hydrolysis Profiles of Formalin Fixed Paraffin-Embedded Tumors Based on IOD (Integrated Optical Density and Nuclear Texture Feature Measurements

    Directory of Open Access Journals (Sweden)

    Margareta Fležar

    1999-01-01

    Full Text Available The aim of the study was to determine the optimal hydrolysis time for Feulgen DNA staining of archival formalin fixed paraffin-embedded surgical samples, prepared as single cell suspensions for image cytometric measurements. The nuclear texture features along with the IOD (integrated optical density) of the tumor nuclei were analysed by an automated high resolution image cytometer as a function of the duration of hydrolysis treatment (in 5 N HCl at room temperature). Tissue blocks of breast carcinoma, ovarian serous carcinoma, ovarian serous tumor of borderline malignancy and leiomyosarcoma were included in the study. IOD hydrolysis profiles showed a plateau between 30 and 60 min in the breast carcinoma and leiomyosarcoma, and between 40 and 60 min in the ovarian serous carcinoma and ovarian serous tumor of borderline malignancy. Most of the nuclear texture features remained stable after 20 min of hydrolysis treatment. Our results indicate that the optimal hydrolysis time for IOD and for nuclear texture feature measurements was between 40 and 60 min in the cell preparations from tissue blocks of three epithelial and one soft tissue tumor.
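    For readers unfamiliar with IOD, the following NumPy sketch shows one common way to compute it from a grey-level image and a nucleus mask, summing the optical density -log10(I/I0) over nucleus pixels; the background intensity I0, the toy image and the rectangular mask are illustrative assumptions, not the cytometer's actual calibration.

        import numpy as np

        def integrated_optical_density(intensity, nucleus_mask, i0=255.0):
            """IOD = sum over nucleus pixels of the optical density -log10(I / I0),
            with I clipped away from zero to keep the logarithm finite."""
            i = np.clip(intensity[nucleus_mask].astype(float), 1.0, i0)
            return np.sum(-np.log10(i / i0))

        img = np.random.randint(40, 200, (64, 64))
        mask = np.zeros((64, 64), dtype=bool)
        mask[20:40, 20:40] = True                    # placeholder nucleus segmentation
        print(integrated_optical_density(img, mask))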

  14. Novel Moment Features Extraction for Recognizing Handwritten Arabic Letters

    Directory of Open Access Journals (Sweden)

    Gheith Abandah

    2009-01-01

    Full Text Available Problem statement: Offline recognition of handwritten Arabic text awaits accurate recognition solutions. Most of the Arabic letters have secondary components that are important in recognizing these letters. However these components have large writing variations. We targeted enhancing the feature extraction stage in recognizing handwritten Arabic text. Approach: In this study, we proposed a novel feature extraction approach of handwritten Arabic letters. Pre-segmented letters were first partitioned into main body and secondary components. Then moment features were extracted from the whole letter as well as from the main body and the secondary components. Using multi-objective genetic algorithm, efficient feature subsets were selected. Finally, various feature subsets were evaluated according to their classification error using an SVM classifier. Results: The proposed approach improved the classification error in all cases studied. For example, the improvements of 20-feature subsets of normalized central moments and Zernike moments were 15 and 10%, respectively. Conclusion/Recommendations: Extracting and selecting statistical features from handwritten Arabic letters, their main bodies and their secondary components provided feature subsets that give higher recognition accuracies compared to the subsets of the whole letters alone.
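    A small NumPy sketch of the normalized central moments used as features is shown below; the selected moment orders and the toy letter image are arbitrary illustrations, and the Zernike moments and the genetic-algorithm selection step from the abstract are not reproduced.

        import numpy as np

        def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
            """Translation- and scale-invariant moments eta_pq of a (binary) letter image:
            central moments mu_pq normalised by mu_00 ** (1 + (p + q) / 2)."""
            y, x = np.indices(img.shape)
            m00 = img.sum()
            xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
            feats = []
            for p, q in orders:
                mu_pq = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
                feats.append(mu_pq / m00 ** (1 + (p + q) / 2.0))
            return np.array(feats)

        letter = np.zeros((32, 32))
        letter[8:24, 12:20] = 1.0                    # toy stroke standing in for a letter body
        print(normalized_central_moments(letter))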

  15. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voronoi...

  16. EEG signal features extraction based on fractal dimension.

    Science.gov (United States)

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
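    The record does not spell out which fractal-dimension estimators were used (two of the indices are novel to that work), so the sketch below shows one widely used option for 1-D signals, Higuchi's method, implemented in NumPy on a surrogate EEG epoch; treat it as a generic illustration rather than the authors' feature set.

        import numpy as np

        def higuchi_fd(x, kmax=10):
            """Higuchi's fractal dimension of a 1-D signal: average curve length L(k)
            at several lags k, then the slope of log L(k) versus log (1/k)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            lengths = []
            ks = np.arange(1, kmax + 1)
            for k in ks:
                lk = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if len(idx) < 2:
                        continue
                    dist = np.abs(np.diff(x[idx])).sum()
                    lk.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
                lengths.append(np.mean(lk))
            slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
            return slope

        eeg = np.cumsum(np.random.randn(1000))       # surrogate EEG epoch
        print(higuchi_fd(eeg))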

  17. Automated Breast Cancer Diagnosis based on GVF-Snake Segmentation, Wavelet Features Extraction and Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Abderrahim Sebri

    2007-01-01

    Full Text Available Breast cancer accounts for the second most cancer diagnoses among women and the second most cancer deaths in the world. In fact, more than 11000 women die each year, all over the world, because of this disease. Automatic breast cancer diagnosis is a very important goal of medical informatics research. Some research has been oriented towards automating diagnosis at the mammographic stage, while other work has treated the problem at the cytological stage. In this work, we describe the current state of the ongoing BC automated diagnosis research program. It is a software system that provides expert diagnosis of breast cancer based on three steps of cytological image analysis. The first step is based on segmentation using an active contour for cell tracking and isolation of the nucleus in the studied image. Then, from this nucleus, textural features are extracted using wavelet transforms to characterize the image by its texture, so that malignant texture can be differentiated from benign on the assumption that tumoral texture differs from the texture of other kinds of tissue. Finally, the obtained features are introduced as the input vector of a Multi-Layer Perceptron (MLP) to classify the images into malignant and benign ones.

  18. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    Science.gov (United States)

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have widely been used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without Dysplasia (OSFWD) and OSF with Dysplasia (OSFD), which would help the oral onco-pathologists to screen the subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in an M×N×3 matrix of integers. Depending on either normal or OSF condition, the image has various granular structures, which are self-similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), Radial Basis Probabilistic Neural Network (RBPNN) to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, and sensitivity and specificity of 94.5% and 98.8% respectively. Finally, we have proposed a novel integrated index called Oral Malignancy Index (OMI) using the HOS, LBP, LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can

  19. DSP Optimized Implementation of Co-occurrence Matrix Texture Feature

    Institute of Scientific and Technical Information of China (English)

    王职军; 梁光明; 刘任任; 徐克强; 谢俊

    2014-01-01

    Texture feature computation based on the Gray Level Co-occurrence Matrix (GLCM) is complex and time-consuming, which seriously affects the efficiency of program execution. To address this problem, this paper analyses the principle of GLCM-based texture feature extraction, studies the architecture and performance of the TMS320C6678 DSP, and proposes optimizations of memory-access bandwidth and software pipelining for the GLCM texture-feature computation. The optimized program was implemented on the TMS320C6678 under the CCS 5.3 software platform, reducing the execution time from 1.94 ms to 0.259 ms. Experimental results show that the proposed optimization shortens code execution time, improves code performance, and meets the practical needs of embedded image-processing systems.

  20. Texture features on T2-weighted magnetic resonance imaging: new potential biomarkers for prostate cancer aggressiveness

    Science.gov (United States)

    Vignati, A.; Mazzetti, S.; Giannini, V.; Russo, F.; Bollito, E.; Porpiglia, F.; Stasi, M.; Regge, D.

    2015-04-01

    To explore contrast (C) and homogeneity (H) gray-level co-occurrence matrix texture features on T2-weighted (T2w) Magnetic Resonance (MR) images and apparent diffusion coefficient (ADC) maps for predicting prostate cancer (PCa) aggressiveness, and to compare them with traditional ADC metrics for differentiating low- from intermediate/high-grade PCas. The local Ethics Committee approved this prospective study of 93 patients (median age, 65 years), who underwent 1.5 T multiparametric endorectal MR imaging before prostatectomy. Clinically significant (volume ≥0.5 ml) peripheral tumours were outlined on histological sections, contoured on T2w and ADC images, and their pathological Gleason Score (pGS) was recorded. C, H, and traditional ADC metrics (mean, median, 10th and 25th percentile) were calculated on the largest lesion slice, and correlated with the pGS through the Spearman correlation coefficient. The area under the receiver operating characteristic curve (AUC) assessed how parameters differentiate pGS = 6 from pGS ≥ 7. The dataset included 49 clinically significant PCas with a balanced distribution of pGS. The Spearman ρ and AUC values on ADC were: -0.489, 0.823 (mean); -0.522, 0.821 (median); -0.569, 0.854 (10th percentile); -0.556, 0.854 (25th percentile); -0.386, 0.871 (C); 0.533, 0.923 (H); while on T2w they were: -0.654, 0.945 (C); 0.645, 0.962 (H). The AUCs of H on ADC and T2w, and of C on T2w, were significantly higher than that of the mean ADC (p = 0.05). H and C calculated on T2w images outperform ADC parameters in correlating with pGS and differentiating low- from intermediate/high-risk PCas, supporting the role of T2w MR imaging in assessing PCa biological aggressiveness.
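
    A minimal sketch of how the contrast and homogeneity descriptors above can be computed on a lesion region and related to grade, assuming scikit-image >= 0.19, SciPy and scikit-learn; the quantisation, offsets and synthetic data are illustrative choices, not those of the study.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from scipy.stats import spearmanr
      from sklearn.metrics import roc_auc_score

      def contrast_homogeneity(roi, levels=32):
          """GLCM contrast (C) and homogeneity (H) of a quantised lesion ROI."""
          q = np.floor(roi.astype(float) / roi.max() * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          return graycoprops(glcm, "contrast").mean(), graycoprops(glcm, "homogeneity").mean()

      rng = np.random.default_rng(2)
      rois = [rng.integers(1, 255, (40, 40)) for _ in range(30)]   # stand-in largest lesion slices
      gleason = rng.choice([6, 7, 8], 30)                          # synthetic pathological GS
      C, H = np.array([contrast_homogeneity(r) for r in rois]).T
      print("Spearman rho (H vs pGS):", spearmanr(H, gleason)[0])
      print("AUC for pGS >= 7 using H:", roc_auc_score(gleason >= 7, H))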

  1. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
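
    A minimal sketch of the PCA feature-extraction and SVM classification stages described above, assuming scikit-learn; the component count and synthetic ROI chips are illustrative, and the GOC/OT-MACH detection stage is not reproduced.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      chips = rng.random((200, 32 * 32))        # stand-in ROI chips, flattened to vectors
      labels = rng.integers(0, 2, 200)          # 1 = target, 0 = clutter (synthetic)

      # Project each chip onto its leading principal components, then classify with an SVM.
      model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
      print("CV accuracy:", cross_val_score(model, chips, labels, cv=5).mean())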

  3. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being requested in several applications, has directed a number of researchers to automatic video-analysis techniques. Automatic video analysis is based on recognizing shots, i.e. short sequences of contiguous frames that describe the same scene, and key frames representing the salient content of each shot. Since effective shot-boundary detection techniques exist in the literature, in this paper we focus our attention on key-frame extraction techniques, identifying the low-level visual features of the frames that best represent the shot content. To evaluate feature performance, key frames automatically extracted using these features are compared to human-operator video annotations.

  4. Textural features of {sup 18}F-fluorodeoxyglucose positron emission tomography scanning in diagnosing aortic prosthetic graft infection

    Energy Technology Data Exchange (ETDEWEB)

    Saleem, Ben R.; Zeebregts, Clark J. [University of Groningen, University Medical Center Groningen, Department of Surgery, Division of Vascular Surgery, P.O. Box 30 001, Groningen (Netherlands); Beukinga, Roelof J.; Slart, Riemer H.J.A. [University of Groningen, University Medical Center Groningen, Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); University of Twente, Department of Biomedical Photonic Imaging (BMPI), Enschede (Netherlands); Boellaard, Ronald; Glaudemans, Andor W.J.M. [University of Groningen, University Medical Center Groningen, Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); Reijnen, Michel M.P.J. [Rijnstate Hospital, Department of Surgery, Arnhem (Netherlands)

    2017-05-15

    The clinical problem in suspected aortoiliac graft infection (AGI) is to obtain proof of infection. Although {sup 18}F-fluorodeoxyglucose ({sup 18}F-FDG) positron emission tomography scanning (PET) has been suggested to play a pivotal role, an evidence-based interpretation is lacking. The objective of this retrospective study was to examine the feasibility and utility of {sup 18}F-FDG uptake heterogeneity characterized by textural features to diagnose AGI. Thirty patients with a history of aortic graft reconstruction who underwent {sup 18}F-FDG PET/CT scanning were included. Sixteen patients were suspected to have an AGI (group I). AGI was considered proven only in the case of a positive bacterial culture. Positive cultures were found in 10 of the 16 patients (group Ia), and in the other six patients, cultures remained negative (group Ib). A control group was formed of 14 patients undergoing {sup 18}F-FDG PET for other reasons (group II). PET images were assessed using conventional maximal standardized uptake value (SUVmax), tissue-to-background ratio (TBR), and visual grading scale (VGS). Additionally, 64 different {sup 18}F-FDG PET based textural features were applied to characterize {sup 18}F-FDG uptake heterogeneity. To select candidate predictors, univariable logistic regression analysis was performed (α = 0.16). The accuracy was satisfactory in case of an AUC > 0.8. The feature selection process yielded the textural features named variance (AUC = 0.88), high grey level zone emphasis (AUC = 0.87), small zone low grey level emphasis (AUC = 0.80), and small zone high grey level emphasis (AUC = 0.81) most optimal for distinguishing between groups I and II. SUVmax, TBR, and VGS were also able to distinguish between these groups with AUCs of 0.87, 0.78, and 0.90, respectively. The textural feature named short run high grey level emphasis was able to distinguish group Ia from Ib (AUC = 0.83), while for the same task the TBR and VGS were not found to be predictive

  5. Feature extraction with LIDAR data and aerial images

    Science.gov (United States)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data is an irregularly spaced 3D point cloud including reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analysing the point cloud is feature extraction. However, the interpretability of a LIDAR point cloud is often limited by the fact that no object information is provided, and the complex earth topography and object morphology make it impossible for a single operator to classify the entire point cloud precisely. In this paper, a hierarchical method for feature extraction with LIDAR data and aerial images is discussed. The aerial images provide information about object shape and spatial distribution, and the hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it was possible to detect more object information and obtain a better feature extraction result than by using automatic filters alone.

  6. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    Science.gov (United States)

    Miao, X.; Xie, H.

    2015-12-01

    High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the MIZ near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds and submerged ice based on their spatial relationship; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.
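
    A minimal sketch of step (2) above, a random forest separating segmented objects into the four general classes, assuming scikit-learn; the per-object feature set and labels are placeholders rather than the expedition data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(4)
      # Hypothetical per-segment features: mean R, G, B plus one texture statistic.
      X = rng.random((500, 4))
      y = rng.choice(["water", "general_submerged_ice", "shadow", "ice_snow"], 500)

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      print(rf.predict(X[:5]))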

  7. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness in India of the deaf and hard-of-hearing community increases the communication gap between that community and the hearing population. Sign language is developed for deaf and hard-of-hearing people to convey their message by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
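
    A minimal sketch of SIFT keypoint detection and descriptor extraction for a gesture frame, assuming OpenCV >= 4.4 (where SIFT is included in the main package); the gesture image here is a synthetic stand-in.

      import numpy as np
      import cv2

      rng = np.random.default_rng(5)
      gesture = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # stand-in grayscale gesture frame

      sift = cv2.SIFT_create()
      keypoints, descriptors = sift.detectAndCompute(gesture, None)
      n_desc = 0 if descriptors is None else descriptors.shape[0]
      print(len(keypoints), "keypoints,", n_desc, "128-dimensional descriptors")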

  8. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    Limited awareness in India of the deaf and hard-of-hearing community increases the communication gap between that community and the hearing population. Sign language is developed for deaf and hard-of-hearing people to convey their message by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  9. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    Science.gov (United States)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method to realize feature extraction on low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, Lee-filtering method is adopted to realize pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally the common linking method is adopted and the characteristic parameters of magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over other traditional ones for feature extraction of low contrast images.

  10. Combination of 3D skin surface texture features and 2D ABCD features for improved melanoma diagnosis.

    Science.gov (United States)

    Ding, Yi; John, Nigel W; Smith, Lyndon; Sun, Jiuai; Smith, Melvyn

    2015-10-01

    Two-dimensional asymmetry, border irregularity, colour variegation and diameter (ABCD) features are important indicators currently used for computer-assisted diagnosis of malignant melanoma (MM); however, they often prove to be insufficient to make a convincing diagnosis. Previous work has demonstrated that 3D skin surface normal features, in the form of tilt and slant pattern disruptions, are promising new features independent of the existing 2D ABCD features. This work investigates whether improved lesion classification can be achieved by combining the 3D features with the 2D ABCD features. Experiments using a nonlinear support vector machine classifier show that many combinations of the 2D ABCD features and the 3D features give substantially better classification accuracy than (1) single features and (2) many combinations of the 2D ABCD features alone. The best 2D and 3D feature combination includes the overall 3D skin surface disruption, the asymmetry feature and all three colour channel features. It gives an overall 87.8 % successful classification, which is better than the best single feature with 78.0 % and the best 2D feature combination with 83.1 %. These results demonstrate that (1) the 3D features add value to the existing lesion classification and (2) combining the 3D feature with all the 2D features does not lead to the best lesion classification. The two ABCD features not selected by the best 2D and 3D combination, namely (1) the border feature and (2) the diameter feature, were also studied in separate experiments. It was found that inclusion of either feature in the 2D and 3D combination can successfully classify 3 out of 4 lesion groups. The one group not accurately classified by either feature can be classified satisfactorily by the other. In both cases, they showed better classification performance than the combinations without the 3D feature. This further demonstrates that (1) the 3D feature can be used to

  11. Fast SIFT design for real-time visual feature extraction.

    Science.gov (United States)

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.

  12. Local features for enhancement and minutiae extraction in fingerprints.

    Science.gov (United States)

    Fronthaler, Hartwig; Kollreider, Klaus; Bigun, Josef

    2008-03-01

    Accurate fingerprint recognition presupposes robust feature extraction which is often hampered by noisy input data. We suggest common techniques for both enhancement and minutiae extraction, employing symmetry features. For enhancement, a Laplacian-like image pyramid is used to decompose the original fingerprint into sub-bands corresponding to different spatial scales. In a further step, contextual smoothing is performed on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor (linear symmetry features). For minutiae extraction, parabolic symmetry is added to the local fingerprint model which allows to accurately detect the position and direction of a minutia simultaneously. Our experiments support the view that using the suggested parabolic symmetry features, the extraction of which does not require explicit thinning or other morphological operations, constitute a robust alternative to conventional minutiae extraction. All necessary image processing is done in the spatial domain using 1-D filters only, avoiding block artifacts that reduce the biometric information. We present comparisons to other studies on enhancement in matching tasks employing the open source matcher from NIST, FIS2. Furthermore, we compare the proposed minutiae extraction method with the corresponding method from the NIST package, mindtct. A top five commercial matcher from FVC2006 is used in enhancement quantification as well. The matching error is lowered significantly when plugging in the suggested methods. The FVC2004 fingerprint database, notable for its exceptionally low-quality fingerprints, is used for all experiments.

  13. Surface Electromyography Feature Extraction Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Farzaneh Akhavan Mahdavi

    2012-12-01

    Full Text Available Considering the vast variety of EMG signal applications, such as rehabilitation of people suffering from mobility limitations, scientists have done much research on EMG control systems. In this regard, feature extraction of the EMG signal has been highly valued as a significant technique to extract the desired information from the EMG signal and remove unnecessary parts. In this study, the Wavelet Transform (WT) has been applied as the main technique to extract Surface EMG (SEMG) features, because the WT is consistent with the nature of EMG as a non-stationary signal. Furthermore, two evaluation criteria, namely the RES index (the ratio of a Euclidean distance to a standard deviation) and the scatter plot, are used to investigate the efficiency of wavelet feature extraction. The results illustrated an improvement in class separability of hand movements in feature space. Accordingly, it has been shown that only the SEMG features extracted from the first and second levels of WT decomposition by the second order of the Daubechies family (db2) yielded the best class separability.
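
    A minimal sketch of the db2 wavelet features and the RES index mentioned above, assuming PyWavelets; the single-number RES shown here (distance between class means over an averaged standard deviation) is one straightforward reading of the criterion, and the SEMG signals are synthetic.

      import numpy as np
      import pywt

      def semg_wavelet_features(signal, wavelet="db2", level=2):
          """Energy of the first- and second-level detail coefficients of one SEMG channel."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA2, cD2, cD1]
          return np.array([np.sum(c ** 2) for c in coeffs[1:]])

      def res_index(class_a, class_b):
          """Ratio of the Euclidean distance between class means to the averaged standard deviation."""
          d = np.linalg.norm(class_a.mean(axis=0) - class_b.mean(axis=0))
          s = 0.5 * (class_a.std(axis=0).mean() + class_b.std(axis=0).mean())
          return d / s

      rng = np.random.default_rng(6)
      grasp = np.array([semg_wavelet_features(rng.normal(0, 1.0, 512)) for _ in range(20)])
      rest = np.array([semg_wavelet_features(rng.normal(0, 0.3, 512)) for _ in range(20)])
      print("RES index:", res_index(grasp, rest))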

  14. THE IDENTIFICATION OF PILL USING FEATURE EXTRACTION IN IMAGE MINING

    Directory of Open Access Journals (Sweden)

    A. Hema

    2015-02-01

    Full Text Available With the help of image mining techniques, an automatic pill identification system was investigated in this study for matching images of pills based on several features such as imprint, color, size and shape. Image mining is an inter-disciplinary task requiring expertise from various fields such as computer vision, image retrieval, image matching and pattern recognition. Image mining is the method in which unusual patterns are detected so that only hidden and useful image data are stored in a large database. It involves two different approaches for image matching. This research presents drug identification, registration, detection and matching, with text, color and shape extraction of the image under the image mining concept, to identify legal and illegal pills with greater accuracy. Initially, the preprocessing is carried out using a novel interpolation algorithm. The main aim of this interpolation algorithm is to reduce the artifacts, blurring and jagged edges introduced during up-sampling. Then the registration process is proposed with two modules: feature extraction and corner detection. In feature extraction the noisy high-frequency edges are discarded and relevant high-frequency edges are selected. The corner detection approach detects the high-frequency pixels at the intersection points, through which the overall performance is improved. There is a need to segregate the dataset into groups based on the query image's size, shape, color, text, etc.; the process of segregating the required information is called feature extraction. The feature extraction is done using a geometrical gradient feature transformation. Finally, color and shape feature extraction were performed using a color histogram and a geometrical gradient vector. Simulation results show that the proposed techniques provide accurate retrieval results both in terms of time and accuracy when compared to conventional approaches.

  15. Combining Multiple Feature Extraction Techniques for Handwritten Devnagari Character Recognition

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present an OCR for Handwritten Devnagari Characters. Basic symbols are recognized by a neural classifier. We have used four feature extraction techniques, namely intersection, shadow features, chain code histogram and straight line fitting features. Shadow features are computed globally for the character image, while intersection features, chain code histogram features and line fitting features are computed by dividing the character image into different segments. A weighted majority voting technique is used for combining the classification decisions obtained from four Multi-Layer Perceptron (MLP) based classifiers. On experimentation with a dataset of 4900 samples, the overall recognition rate observed is 92.80% when the top five choices are considered. This method is compared with other recent methods for Handwritten Devnagari Character Recognition and it has been observed that this approach has a better success rate than other methods.

  16. The research on recognition and extraction of river feature in IKNOS based on frequency domain

    Science.gov (United States)

    Wang, Ke; Feng, Xuezhi; Xiao, Pengfeng; Wu, Guoping

    2009-10-01

    As the resolution of remotely sensed imagery becomes higher, new methods are introduced to process high-resolution remotely sensed imagery. The algorithms introduced in this paper recognize and extract river features based on the frequency domain. This paper uses the Gabor filter in the frequency domain to enhance the texture of the river and remove noise from the remotely sensed imagery. Then, according to the theory of phase congruency, this paper computes the phase congruency (PC) of every point to detect features such as river edges, buildings and farmland in the remotely sensed imagery. Lastly, a skeletal methodology is introduced to determine the edge of the river with the help of the river trend.
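
    A minimal sketch of the Gabor-filter enhancement step described above, using scikit-image's spatial-domain implementation rather than an explicit frequency-domain filter; the filter frequency, orientations and input band are illustrative assumptions.

      import numpy as np
      from skimage.filters import gabor

      rng = np.random.default_rng(7)
      band = rng.random((256, 256))   # stand-in panchromatic band of the scene

      # Keep, per pixel, the strongest Gabor response over a few orientations to enhance
      # the elongated river texture before edge / phase-congruency analysis.
      responses = [gabor(band, frequency=0.15, theta=t)[0]
                   for t in np.linspace(0, np.pi, 4, endpoint=False)]
      enhanced = np.max(np.abs(responses), axis=0)
      print(enhanced.shape)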

  17. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    , including skyscrapers and bridges, which were confounded and extracted as buildings. This can be attributed to low point density at building edges and on flat roofs or occlusions due to which LiDAR cannot give as much precise planimetric accuracy as photogrammetric techniques (in segmentation) and lack of optimum use of textural information as well as contextual information (especially at walls which are away from roof) in automatic extraction algorithm. In addition, there were no separate classes for bridges or the features lying inside the water and multiple water height levels were also not considered. Based on these inferences, we conclude that the LiDAR-based 3D feature extraction supplemented by high resolution satellite data is a potential application which can be used for understanding and characterization of urban setup.

  18. Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

    OpenAIRE

    Turroni, Francesco

    2012-01-01

    The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...

  19. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    The majority of musical feature extraction applications are based on the Fourier transform in various disguises. This is despite the fact that this transform is subject to a series of restrictions, which admittedly ease the computation and interpretation of transform coefficients, but also imposes...... arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transform such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  20. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
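
    A minimal sketch of a moment-style block feature in the spirit of the MFEA described above, not the authors' exact formulation; raw and central intensity moments are computed directly with NumPy for one image block.

      import numpy as np

      def block_moments(block):
          """Total intensity, intensity centroid and spread about the centroid for one block."""
          block = block.astype(float)
          y, x = np.mgrid[0:block.shape[0], 0:block.shape[1]]
          m00 = block.sum()
          cx, cy = (x * block).sum() / m00, (y * block).sum() / m00     # intensity centroid
          spread = (((x - cx) ** 2 + (y - cy) ** 2) * block).sum() / m00
          return np.array([m00, cx, cy, spread])

      rng = np.random.default_rng(8)
      frame_block = rng.integers(0, 256, (16, 16))    # stand-in block from an aerial frame
      print(block_moments(frame_block))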

  1. A MapReduce scheme for image feature extraction and its application to man-made object detection

    Science.gov (United States)

    Cai, Fei; Chen, Honghui

    2013-07-01

    A fundamental challenge in image engineering is how to locate objects of interest in high-resolution images with efficient detection performance. Several man-made object detection approaches have been proposed, but the majority of these methods are not truly time-saving and suffer from a low degree of detection precision. To address this issue, we propose a novel approach for man-made object detection in aerial imagery involving a MapReduce scheme for large-scale image analysis to support image feature extraction, which can be widely applied to compute-intensive tasks in a highly parallel way, together with texture feature extraction and clustering. Comprehensive experiments show that the parallel framework saves a large amount of feature-extraction time while delivering satisfactory object-detection performance.

  2. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighbouring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak against rotated images, so the method was improved by also adding HLAC features computed on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features from 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the colour retinal image and the four output values of the first ANN, the Gabor filter, the double-ring filter and the black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output clearly high (white) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. The result can be used for quantitative analysis of the blood vessels.

  3. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise-removal step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step, whereas depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous work. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  4. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  5. Remote Sensing Image Feature Extracting Based Multiple Ant Colonies Cooperation

    Directory of Open Access Journals (Sweden)

    Zhang Zhi-long

    2014-02-01

    Full Text Available This paper presents a novel feature extraction method for remote sensing imagery based on the cooperation of multiple ant colonies. First, multiresolution expression of the input remote sensing imagery is created, and two different ant colonies are spread on different resolution images. The ant colony in the low-resolution image uses phase congruency as the inspiration information, whereas that in the high-resolution image uses gradient magnitude. The two ant colonies cooperate to detect features in the image by sharing the same pheromone matrix. Finally, the image features are extracted on the basis of the pheromone matrix threshold. Because a substantial amount of information in the input image is used as inspiration information of the ant colonies, the proposed method shows higher intelligence and acquires more complete and meaningful image features than those of other simple edge detectors.

  6. Face Feature Extraction for Recognition Using Radon Transform

    Directory of Open Access Journals (Sweden)

    Justice Kwame Appati

    2016-07-01

    Full Text Available Face recognition has for some time been a challenging exercise, especially when it comes to recognizing faces with different poses. This is perhaps due to the use of inappropriate descriptors during the feature extraction stage. In this paper, a thorough examination of the Radon Transform as a face signature descriptor was carried out on one of the standard databases. Global features were considered by constructing Gray Level Co-occurrence Matrices (GLCMs). Correlation, Energy, Homogeneity and Contrast are computed from each image to form the feature vector for recognition. We show that the transformed face signatures are robust and invariant to the different poses. With the statistical features extracted, face training classes are optimally separated through the use of a Support Vector Machine (SVM), while the recognition rate for test face images is computed based on the L1 norm.
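
    A minimal sketch of the pipeline described above, a Radon transform of the face image followed by GLCM statistics and an SVM, assuming scikit-image >= 0.19 and scikit-learn; the projection angles, quantisation and synthetic data are illustrative.

      import numpy as np
      from skimage.transform import radon
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def radon_glcm_features(face, levels=32):
          """Correlation, energy, homogeneity and contrast of the GLCM of the Radon signature."""
          sinogram = radon(face.astype(float), theta=np.arange(0, 180, 5), circle=False)
          q = np.floor((sinogram - sinogram.min()) / (np.ptp(sinogram) + 1e-9) * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, symmetric=True, normed=True)
          return np.array([graycoprops(glcm, p)[0, 0]
                           for p in ("correlation", "energy", "homogeneity", "contrast")])

      rng = np.random.default_rng(9)
      faces = rng.random((20, 64, 64))               # stand-in face images
      X = np.array([radon_glcm_features(f) for f in faces])
      y = rng.integers(0, 4, 20)                     # synthetic identity labels
      clf = SVC(kernel="linear").fit(X, y)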

  7. Geometric feature extraction by a multimarked point process.

    Science.gov (United States)

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  8. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F -score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.

  9. Discriminative tonal feature extraction method in mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

    To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden markov model (HMM) based tone modeling. The method uses linear transforms to project F0 (fundamental frequency) features of neighboring syllables as compensations, and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained by using an objective function termed as "minimum tone error", which is a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve 3.82% tone recognition rate improvement, compared with the baseline, using maximum likelihood trained HMM on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.

  10. GFF-Ex: a genome feature extraction package

    OpenAIRE

    Rastogi, Achal; Gupta, Dinesh

    2014-01-01

    Background Genomic features of whole genome sequences emerging from various sequencing and annotation projects are represented and stored in several formats. Amongst these formats, the GFF (Generic/General Feature Format) has emerged as a widely accepted, portable and successfully used flat file format for genome annotation storage. With an increasing interest in genome annotation projects and secondary and meta-analysis, there is a need for efficient tools to extract sequences of interests f...

  11. Data Feature Extraction for High-Rate 3-Phase Data

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state is also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.

  12. Texture analysis of terrain images: exploiting directional properties of a local-feature statistics operator using small masks

    Science.gov (United States)

    Telfer, Duncan J.; Wilson, Lionel

    1996-06-01

    The texture discrimination properties of a directional operator, using a window-based histogram correlation technique, are described for test images and natural scene imagery. The natural image examples are drawn from digitized monochrome aerial photographs of volcanic terrain, and of coastal dune fields in the region of West Lancashire, England. The operator masks use directionally encoded local difference parameters and are of two sizes, 3 by 3 and 2 by 2 pixels, scanning over a window size of 25 by 25 pixels to create an autocorrelation map normalized to the output gray-scale range. Comparisons of the outputs, discussed in terms of the ability to detect specific types of texture feature associated with known lava-flow regimes, demonstrate the viability of the technique down to single-pixel resolution, particularly for folded or striated topography. The vegetation cover of the volcanic terrain is also characterized in terms of its response to these texture operators. The output from the dune field images, which contain a wider variety of vegetation cover, also demonstrates the discrimination capability of the technique and its potential usefulness in environmental and conservation management.

  13. TEXTURAL FRACTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Hynek Lauschmann

    2011-05-01

    Full Text Available The reconstitution of the history of a fatigue process is based on knowledge of the correspondences between the morphology of the crack surface and the velocity of the crack growth (crack growth rate, CGR). Textural fractography is oriented to mesoscopic SEM magnifications (30 to 500×). Images contain complicated textures without distinct borders. The aim is to find characteristics of this texture which correlate with the CGR. Pre-processing of the images is necessary to obtain a homogeneous texture. Three methods of textural analysis have been developed and realized as computational programs: a method based on the spectral structure of the image, a method based on a Gibbs random field (GRF) model, and a method based on the idealization of light objects into a fibre process. In order to extract and analyze the fibre process, special methods - tracing fibres and a database-oriented analysis of a fibre process - have been developed.

  14. TOPOGRAPHIC FEATURE EXTRACTION FOR BENGALI AND HINDI CHARACTER IMAGES

    Directory of Open Access Journals (Sweden)

    Soumen Bag

    2011-06-01

    Full Text Available Feature selection and extraction plays an important role in different classification-based problems such as face recognition, signature verification, optical character recognition (OCR), etc. The performance of OCR highly depends on the proper selection and extraction of the feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By the topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East and West). We consider three types of topographic features: closed regions, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi character images. Initial results demonstrate the efficacy of our approach.

  15. Topographic Feature Extraction for Bengali and Hindi Character Images

    Directory of Open Access Journals (Sweden)

    Soumen Bag

    2011-09-01

    Full Text Available Feature selection and extraction plays an important role in different classification-based problems such as face recognition, signature verification, optical character recognition (OCR), etc. The performance of OCR highly depends on the proper selection and extraction of the feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By the topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East and West). We consider three types of topographic features: closed regions, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi character images. Initial results demonstrate the efficacy of our approach.

  16. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    Science.gov (United States)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  17. STUDY ON GLCM-BASED TEXTURE EXTRACTION METHODS

    Institute of Scientific and Technical Information of China (English)

    任国贞; 江涛

    2014-01-01

    With the enhancement of the spatial resolution of remote sensing images, texture features play an increasingly important role in remote sensing image analysis. In this paper we study nine texture descriptors, including gray level co-occurrence matrix (GLCM) homogeneity, contrast, dissimilarity, angular second moment (ASM), correlation and entropy, and gray level difference vector (GLDV) contrast, mean and angular second moment, and find that GLCM-ASM and GLDV-ASM are the more effective descriptors for texture extraction from high-resolution remote sensing imagery.
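
    A minimal sketch computing several of the GLCM descriptors listed above with scikit-image >= 0.19 (entropy is derived directly from the normalised matrix because it is not a built-in property); the GLDV descriptors are not reproduced, and the window size, offsets and quantisation are illustrative.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_descriptors(window, levels=16):
          """Homogeneity, contrast, dissimilarity, ASM, correlation and entropy of one window."""
          q = np.floor(window.astype(float) / 256 * levels).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, symmetric=True, normed=True)
          feats = {p: graycoprops(glcm, p)[0, 0]
                   for p in ("homogeneity", "contrast", "dissimilarity", "ASM", "correlation")}
          p = glcm[:, :, 0, 0]
          feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
          return feats

      rng = np.random.default_rng(10)
      window = rng.integers(0, 256, (33, 33))    # stand-in moving window from a high-resolution scene
      print(glcm_descriptors(window))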

  18. OPTIMIZED LOCAL TERNARY PATTERNS: A NEW TEXTURE MODEL WITH SET OF OPTIMAL PATTERNS FOR TEXTURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    G. Madasamy Raja

    2013-01-01

    Full Text Available Texture analysis is one of the important and useful tasks in image processing applications. Many texture models have been developed over the past few years, and Local Binary Patterns (LBP) is one of the simple and efficient approaches among them. A number of extensions to the LBP method have also been presented, but the problem remains challenging in feature vector generation and comparison. As textures are oriented and scaled differently, a texture model should effectively handle grey-scale variation, rotation variation, illumination variation and noise. The length of the feature vector in a texture model also plays an important role in deciding the time complexity of the texture analysis. This study proposes a new texture model, called Optimized Local Ternary Patterns (OLTP), within the spatial methods of texture analysis. The proposed texture model is based on Local Ternary Patterns (LTP), which in turn are based on LBP. A new concept called "Level of Optimality" for selecting the optimal set of patterns is discussed in this study. The proposed texture model uses only optimal patterns to extract textural information from digital images, thereby reducing the length of the feature vector. The proposed model is robust to image rotation, grey-scale transformation, histogram equalization and noise. The results are compared with other widely used texture models by applying classification tests to a variety of texture images from the standard Brodatz texture database. Experimental results prove that the proposed texture model is robust to grey-scale variation, image rotation, histogram equalization and noise. Experimental results also show that the proposed texture model improves the classification accuracy and the speed of the classification process. In all tested tasks, the proposed method outperforms the earlier methods.
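
    A minimal sketch of the basic Local Ternary Pattern coding that OLTP builds on (the optimal-pattern selection itself is not reproduced); the 3x3 neighbourhood, the threshold t and the usual upper/lower code splitting are assumptions following the standard LTP formulation.

      import numpy as np

      def ltp_codes(gray, t=5):
          """Upper and lower LTP binary codes from each 3x3 neighbourhood around the centre +/- t."""
          g = gray.astype(int)
          c = g[1:-1, 1:-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          upper = np.zeros_like(c)
          lower = np.zeros_like(c)
          for bit, (dy, dx) in enumerate(offsets):
              n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
              upper += (n >= c + t).astype(int) << bit    # neighbour clearly brighter than centre
              lower += (n <= c - t).astype(int) << bit    # neighbour clearly darker than centre
          return upper, lower

      rng = np.random.default_rng(11)
      texture = rng.integers(0, 256, (64, 64))
      upper, lower = ltp_codes(texture)
      print(upper.shape, lower.shape)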

  19. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon beams.

  20. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon beams.

  1. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm is te...

  2. Features extraction in anterior and posterior cruciate ligaments analysis.

    Science.gov (United States)

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make it easier to diagnose them. Extraction of the feature vectors is obtained by analysis of both the anterior and posterior cruciate ligaments. This procedure is performed after the extraction process of both ligaments. In the first stage, a region of interest including the cruciate ligaments (CL) is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, helping to reduce blurred edges, has been implemented. After finding the region of interest (ROI), the fuzzy connectedness procedure is performed. This procedure permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted anterior and posterior cruciate ligament structures, 3-dimensional models of the anterior and posterior cruciate ligaments are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK).

  3. METHOD TO EXTRACT BLEND SURFACE FEATURE IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    Lü Zhen; Ke Yinglin; Sun Qing; Kelvin W; Huang Xiaoping

    2003-01-01

    A new method of extraction of blend surface feature is presented. It contains two steps: segmentation and recovery of parametric representation of the blend. The segmentation separates the points in the blend region from the rest of the input point cloud with the processes of sampling point data, estimation of local surface curvature properties and comparison of maximum curvature values. The recovery of parametric representation generates a set of profile curves by marching throughout the blend and fitting cylinders. Compared with the existing approaches of blend surface feature extraction, the proposed method reduces the requirement of user interaction and is capable of extracting blend surface with either constant radius or variable radius. Application examples are presented to verify the proposed method.

  4. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After generating the 3D model of the object, it is difficult to do exact texture mapping for the real object. We take efforts to create seamless texture maps for a virtual heritage of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions made as uniform as possible. After preprocessing, images are then registered on the 3D mesh in a semi-automatic way. Then we divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach is proposed to erase the seams between different images that map onto adjacent mesh patches, based on texture blending. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  5. Gallium antimonide texturing for enhanced light extraction from infrared optoelectronics devices

    Science.gov (United States)

    Wassweiler, Ella; Toor, Fatima

    2016-06-01

    The use of gallium antimonide (GaSb) is increasing, especially for optoelectronic devices in the infrared wavelengths. It has been demonstrated in gallium nitride (GaN) devices operating at ultraviolet (UV) wavelengths, that surface textures increase the overall device efficiency. In this work, we fabricated eight different surface textures in GaSb to be used in enhancing efficiency in infrared wavelength devices. Through chemical etching with hydrofluoric acid, hydrogen peroxide, and tartaric acid we characterize the types of surface textures formed and the removal rate of entire layers of GaSb. Through optimization of the etching recipes we lower the reflectivity from 35.7% to 1% at 4 μm wavelength for bare and textured GaSb, respectively. In addition, we simulate surface textures using ray optics in finite element method solver software to provide explanation of our experimental findings.

  6. Gallium antimonide texturing for enhanced light extraction from infrared optoelectronics devices

    Directory of Open Access Journals (Sweden)

    Ella Wassweiler

    2016-06-01

    Full Text Available The use of gallium antimonide (GaSb) is increasing, especially for optoelectronic devices in the infrared wavelengths. It has been demonstrated in gallium nitride (GaN) devices operating at ultraviolet (UV) wavelengths, that surface textures increase the overall device efficiency. In this work, we fabricated eight different surface textures in GaSb to be used in enhancing efficiency in infrared wavelength devices. Through chemical etching with hydrofluoric acid, hydrogen peroxide, and tartaric acid we characterize the types of surface textures formed and the removal rate of entire layers of GaSb. Through optimization of the etching recipes we lower the reflectivity from 35.7% to 1% at 4 μm wavelength for bare and textured GaSb, respectively. In addition, we simulate surface textures using ray optics in finite element method solver software to provide explanation of our experimental findings.

  7. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

    Full Text Available Audio classification serves as the fundamental step towards the rapid growth in audio data volume. Due to the increasing size of the multimedia sources speech and music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multi resolution analysis is the most significant statistical way to extract the features from the input signal and in this study, a method is deployed to model the extracted wavelet feature. Support Vector Machines (SVM) are based on the principle of structural risk minimization. SVM is applied to classify audio into their classes namely speech and music, by learning from training data. Then the proposed method extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
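
    A minimal sketch of the kind of pipeline described above, assuming PyWavelets and scikit-learn; the per-subband statistics (mean absolute value, standard deviation, energy), wavelet choice and decomposition level are illustrative, not the paper's exact features.

```python
# Sketch of a DWT-feature + SVM speech/music discriminator.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Summarize each DWT subband with simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

# Hypothetical usage with labeled audio frames (0 = speech, 1 = music):
# clf = SVC(kernel="rbf").fit([dwt_features(x) for x in X_train], y_train)
```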

  8. Feature extraction from slice data for reverse engineering

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yingjie; LU Shangning

    2007-01-01

    A new approach to feature extraction for slice data points is presented. The reconstruction of objects is performed as follows. First, all contours in each slice are extracted by contour tracing algorithms. Then the data points on the contours are analyzed, and the curve segments of the contours are divided into three categories: straight lines, conic curves and B-spline curves. The curve fitting methods are applied for each curve segment to remove the unwanted points with pre-determined tolerance. Finally, the features, which consist of the objects and the connection relations among them, are found by matching the corresponding contours in adjacent slices, and 3D models are reconstructed based on the features. The proposed approach has been implemented in OpenGL, and the feasibility of the proposed method has been verified by several cases.

  9. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...... difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... the complexity of hand-crafting feature extractors that combine information across dissimilar modalities of input. Frequent sequence mining is presented as a method to learn feature extractors that fuse physiological and contextual information. This method is evaluated in a game-based dataset and compared...

  10. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

    Full Text Available In computer vision, object detection is an essential process for further processing such as object tracking, analysis and so on. In the same context, extracted features play an important role in detecting the object correctly. In this paper we present a method to extract local features based on an interest point detector, which is used to detect key-points within an image; then, a histogram of gradients (HOG) is computed for the region surrounding that point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and excludes its descriptor. The new descriptor is computed by using the HOG method. The proposed method gets the advantages of both mentioned methods. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial result is encouraging in spite of using a small amount of data for training.
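
    The sketch below illustrates the described combination of a SURF interest-point detector with a HOG descriptor of the surrounding region. It assumes an opencv-contrib build that exposes SURF and scikit-image for HOG; the patch size, Hessian threshold and HOG parameters are illustrative assumptions.

```python
# Sketch: detect interest points with SURF, then describe the surrounding
# patch with HOG instead of the SURF descriptor.
import cv2
from skimage.feature import hog

def surf_hog_features(gray, patch=32):
    # SURF is only available in opencv-contrib builds with nonfree enabled
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints = surf.detect(gray, None)
    descs = []
    half = patch // 2
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        region = gray[max(0, y - half):y + half, max(0, x - half):x + half]
        if region.shape == (patch, patch):          # skip border keypoints
            descs.append(hog(region, orientations=9,
                             pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return descs
```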

  11. Fruits and vegetables recognition based on color and texture features

    Institute of Scientific and Technical Information of China (English)

    陶华伟; 赵力; 奚吉; 虞玲; 王彤

    2014-01-01

    An intelligent fruit and vegetable recognition system based on image recognition can identify different kinds of fruits and vegetables accurately and rapidly, which can improve sales efficiency in supermarkets and markets. The feature extraction method is a very important issue in such a system. However, traditional fruit and vegetable recognition algorithms either ignore the texture of fruits and vegetables, or use texture features that do not represent the texture of fruit and vegetable images well. In order to represent the texture of fruit and vegetable images better and improve the recognition rate of the system, we propose a novel texture feature extraction algorithm called color completed local binary pattern (CCLBP) in this paper. CCLBP constructs a new texture descriptor by extracting completed local binary pattern (CLBP) texture features from the individual color channels. The fruit and vegetable recognition system uses CCLBP to extract image texture features, and uses an HSV color histogram and a border/interior pixel classification (BIC) color histogram to extract image color features. A matching-score fusion algorithm is then used to fuse the color and texture features, and finally a nearest neighbor (NN) classifier is used to recognize the fruit and vegetable. To verify the effectiveness of the algorithm, two fruit and vegetable databases, an interior database and an outdoor database, were constructed in this paper. The interior database, acquired in a laboratory, contains 13 kinds of fruits and vegetables and is used to verify recognition performance under different kinds of illumination. The outdoor database, acquired in a market, contains 47 kinds of fruits and vegetables and is used to verify recognition performance under different numbers of training samples. A Fruit and
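
    A rough sketch of per-channel LBP texture features in the spirit of the CCLBP idea, using scikit-image's plain LBP rather than the authors' completed LBP (CLBP); P, R and the histogram binning are illustrative choices.

```python
# Per-channel uniform LBP histograms concatenated into one texture vector.
import numpy as np
from skimage.feature import local_binary_pattern

def per_channel_lbp_histogram(rgb, P=8, R=1.0):
    feats = []
    for ch in range(3):                              # R, G, B channels
        codes = local_binary_pattern(rgb[:, :, ch], P, R, method="uniform")
        # uniform LBP codes lie in [0, P+1], so P+2 bins cover them all
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.extend(hist)
    return np.array(feats)
```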

  12. Predicting Ki67% expression from DCE-MR images of breast tumors using textural kinetic features in tumor habitats

    Science.gov (United States)

    Chaudhury, Baishali; Zhou, Mu; Farhidzadeh, Hamidreza; Goldgof, Dmitry B.; Hall, Lawrence O.; Gatenby, Robert A.; Gillies, Robert J.; Weinfurtner, Robert J.; Drukteinis, Jennifer S.

    2016-03-01

    The use of Ki67% expression, a cell proliferation marker, as a predictive and prognostic factor has been widely studied in the literature. Yet its usefulness is limited due to inconsistent cut off scores for Ki67% expression, subjective differences in its assessment in various studies, and spatial variation in expression, which makes it difficult to reproduce as a reliable independent prognostic factor. Previous studies have shown that there are significant spatial variations in Ki67% expression, which may limit its clinical prognostic utility after core biopsy. These variations are most evident when examining the periphery of the tumor vs. the core. To date, prediction of Ki67% expression from quantitative image analysis of DCE-MRI is very limited. This work presents a novel computer aided diagnosis framework to use textural kinetics to (i) predict the ratio of periphery Ki67% expression to core Ki67% expression, and (ii) predict Ki67% expression from individual tumor habitats. The pilot cohort consists of T1 weighted fat saturated DCE-MR images from 17 patients. Support vector regression with a radial basis function was used for predicting the Ki67% expression and ratios. The initial results show that texture features from individual tumor habitats are more predictive of the Ki67% expression ratio and spatial Ki67% expression than features from the whole tumor. The Ki67% expression ratio could be predicted with a root mean square error (RMSE) of 1.67%. Quantitative image analysis of DCE-MRI using textural kinetic habitats, has the potential to be used as a non-invasive method for predicting Ki67 percentage and ratio, thus more accurately reporting high KI-67 expression for patient prognosis.

  13. Research on Transmission Line Recognition Based on GLCM Texture Feature

    Institute of Scientific and Technical Information of China (English)

    赵建坤; 王璋奇; 刘世钊

    2015-01-01

    This paper analyzes the characteristics of transmission line images and studies a transmission line recognition method based on gray-level co-occurrence matrix (GLCM) texture features. First, the GLCM texture features of the transmission image are analyzed with an appropriately sized window, and the features with the highest discriminative power are selected for line extraction. Second, the entire image is traversed with these texture features to extract the line. Finally, pixels in non-line regions are removed using image segmentation, and the complete transmission line is fitted by the least-squares method.
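
    The sketch below shows a sliding-window GLCM texture map of the kind the abstract relies on, assuming scikit-image (older releases spell the functions greycomatrix/greycoprops); the window size, 45° offset and contrast property are illustrative, not necessarily the paper's settings.

```python
# Sliding-window GLCM contrast map over a grayscale image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast_map(gray, win=13, levels=64):
    # quantize to a small number of gray levels to keep the GLCM compact
    img = (gray / gray.max() * (levels - 1)).astype(np.uint8)
    half = win // 2
    out = np.zeros_like(img, dtype=float)
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            out[r, c] = graycoprops(glcm, "contrast")[0, 0]
    return out
```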

  14. Comparative Study of a Novel Tool for Follicular Unit Extraction for Individuals with Afro-textured Hair

    Science.gov (United States)

    2016-01-01

    Background: Hair transplantation involving patients with tightly curled Afro-textured hair using follicular unit extraction (FUE) employing conventional rotary punches frequently leads to unacceptably high transection rates. These patients are unsuitable candidates for FUE hair transplantation. Transection rates were observed during FUE in a case series of 18 patients with tightly curled Afro-textured hair using different punches. Methods: Three different punches were sequentially used in patients to extract follicular units with several needle gauges until satisfactory transection rates occurred: conventional sharp and dull rotary punches, followed by a 2-pronged curved nonrotary punch. Results: In all instances, the curved nonrotary punch had the best transection rate, suggesting that it can overcome the high transection rates experienced using conventional sharp or dull rotary punches. Limitations of this study include its being a small, retrospective case series and that the new technique could require additional training by current FUE hair transplant practitioners.

  15. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  16. Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery.

    Directory of Open Access Journals (Sweden)

    Ahmad Chaddad

    Full Text Available This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images.
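
    A hedged sketch of the statistical screening step mentioned above: a Kruskal-Wallis test of one texture feature across the three tissue types, using SciPy; the input arrays and the 0.01 threshold are illustrative.

```python
# Kruskal-Wallis screening of a single texture feature across tissue groups.
from scipy.stats import kruskal

def is_discriminative(benign_vals, neoplasia_vals, carcinoma_vals, alpha=0.01):
    """Return whether the feature differs significantly across the groups."""
    h_stat, p_value = kruskal(benign_vals, neoplasia_vals, carcinoma_vals)
    return p_value < alpha, h_stat, p_value
```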

  17. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
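
    As a sketch of one cue discussed above, straight-line evidence from the Hough transform can help separate ship-like structures from diffuse clouds; this assumes scikit-image, and the Canny sigma and returned summary statistics are illustrative choices rather than the thesis' implementation.

```python
# Hough-based line-strength features from an edge map.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def line_strength(gray):
    edges = canny(gray, sigma=2.0)
    h, angles, dists = hough_line(edges)
    accums, peak_angles, peak_dists = hough_line_peaks(h, angles, dists)
    # number of strong straight lines and the strongest accumulator response
    return len(accums), (accums.max() if len(accums) else 0)
```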

  18. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis of extracting 1D oscillation time series (OTS) for an image. However, the traditional methods using BMS did not consider the correlation of the binary sequence in BMS and the space structure for every map. By further processing of BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into frequency map series (FMS), and the method lessens the influence of noncontinuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy for every map in FMS, the 3D FMS is transformed into 1D OTS (OTS-FMS), which has good geometry invariance for the facial image and contains the space structure information of the image. Finally, by analyzing the OTS-FMS, the standard Euclidean distance is used to measure the distances for OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.

  19. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem one after the other. The Nullspace Method is one of the most effective methods among them. The Nullspace Method tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix. It is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance in statistical analysis, we present a novel facial feature extraction method, i.e., Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results performed on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  20. Visual texture for automated characterisation of geological features in borehole televiewer imagery

    Science.gov (United States)

    Al-Sit, Waleed; Al-Nuaimy, Waleed; Marelli, Matteo; Al-Ataby, Ali

    2015-08-01

    Detailed characterisation of the structure of subsurface fractures is greatly facilitated by digital borehole logging instruments, the interpretation of which is typically time-consuming and labour-intensive. Despite recent advances towards autonomy and automation, the final interpretation remains heavily dependent on the skill, experience, alertness and consistency of a human operator. Existing computational tools fail to detect layers between rocks that do not exhibit distinct fracture boundaries, and often struggle characterising cross-cutting layers and partial fractures. This paper presents a novel approach to the characterisation of planar rock discontinuities from digital images of borehole logs. Multi-resolution texture segmentation and pattern recognition techniques utilising Gabor filters are combined with an iterative adaptation of the Hough transform to enable non-distinct, partial, distorted and steep fractures and layers to be accurately identified and characterised in a fully automated fashion. This approach has successfully detected fractures and layers with high detection accuracy and at a relatively low computational cost.

  1. A Narrative Methodology to Recognize Iris Patterns By Extracting Features Using Gabor Filters and Wavelets

    Directory of Open Access Journals (Sweden)

    Shristi Jha

    2016-01-01

    Full Text Available Iris pattern recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on images of one or both of the irises of an individual's eyes, whose complex random patterns are unique, stable, and can be seen from some distance. Iris recognition uses video camera technology with subtle near-infrared illumination to acquire images of the detail-rich, intricate structures of the iris which are visible externally. In this narrative research paper, the input image is captured, and since the success of iris recognition depends on the quality of the image, the captured image is subjected to preliminary image preprocessing techniques such as localization, segmentation, normalization and noise detection, followed by texture and edge feature extraction using Gabor filters and wavelets; the processed image is then matched with templates stored in the database to detect the iris patterns.
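
    A minimal sketch of Gabor-based texture features of the kind mentioned above, assuming scikit-image; the frequencies, orientations and magnitude statistics are illustrative assumptions.

```python
# Gabor filter bank responses summarized as a small feature vector.
import numpy as np
from skimage.filters import gabor

def gabor_features(norm_iris, frequencies=(0.1, 0.2, 0.3), n_theta=4):
    feats = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(norm_iris, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)               # response magnitude
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)
```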

  2. FACE RECOGNITION USING FEATURE EXTRACTION AND NEURO-FUZZY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Ritesh Vyas

    2012-09-01

    Full Text Available Face is a primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. The human ability to recognize faces is remarkable. People can recognize thousands of faces learned throughout their lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards or changes in hair style. In this work, a system is designed to recognize human faces depending on their facial features. Also to reveal the outline of the face, eyes and nose, edge detection technique has been used. Facial features are extracted in the form of distance between important feature points. After normalization, these feature vectors are learned by artificial neural network and used to recognize facial image.

  3. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the experiments FormSpring.me dataset is used and the effects of preprocessing methods; several classifiers like C4.5, Naïve Bayes, kNN, and SVM; and information gain and chi square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopwords removal are applied. Using feature selection also improves cyberbully detection performance. When classifiers are compared, C4.5 performs the best for the used dataset.
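
    The sketch below assembles the kind of pipeline the study compares (alphabetic tokenization, bag-of-words features, chi-square selection, a tree classifier) with scikit-learn; the token pattern, the value of k and the CART tree standing in for C4.5 are assumptions, not the study's exact setup.

```python
# Text-classification pipeline: tokenize, count, select features, classify.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier  # CART, used as a C4.5 stand-in

pipeline = Pipeline([
    ("vect", CountVectorizer(token_pattern=r"[A-Za-z]+", lowercase=True)),
    ("select", SelectKBest(chi2, k=500)),
    ("clf", DecisionTreeClassifier()),
])
# pipeline.fit(train_posts, train_labels)   # hypothetical labeled data
```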

  4. Comparative study of shape, intensity and texture features and support vector machine for white blood cell classification

    Directory of Open Access Journals (Sweden)

    Mehdi Habibzadeh

    2013-04-01

    Full Text Available The complete blood count (CBC) is a widely used test for counting and categorizing various peripheral particles in the blood. The main goal of the paper is to count and classify white blood cells (leukocytes) in microscopic images into five major categories using features such as shape, intensity and texture features. The first critical step of the counting and classification procedure involves segmentation of individual cells in cytological images of thin blood smears. The quality of segmentation has a significant impact on cell type identification, but poor quality, noise, and/or low resolution images make segmentation less reliable. We analyze the performance of our system for three different sets of features and we determine that the best performance is achieved by wavelet features using the Dual-Tree Complex Wavelet Transform (DT-CWT), which is based on the multi-resolution characteristics of the image. These features are combined with a Support Vector Machine (SVM), which classifies white blood cells into their five primary types. This approach was validated with experiments conducted on digital normal blood smear images with low resolution.

  5. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

    Full Text Available One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Even if drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered as a methodology to increase sensors stability. Several studies evidenced the augmented stability of time variable signals elicited by the modulation of either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, the extraction of features can be accomplished in shorter time with respect to the time necessary to calculate the usual features defined in steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal oxide semiconductors gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments were aimed at measuring the dispersion of sensors features in repeated sequences of a limited number of experimental conditions. Results evidenced that the features extracted during the temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate along the time with respect to the temperature modulation.

  6. Gradient Algorithm on Stiefel Manifold and Application in Feature Extraction

    Directory of Open Access Journals (Sweden)

    Zhang Jian-jun

    2013-09-01

    Full Text Available To improve the computational efficiency of system feature extraction, reduce the occupied memory space, and simplify the program design, a modified gradient descent method on the Stiefel manifold is proposed based on the optimization framework of geometry on the Riemannian manifold. Different geodesic calculation formulas are used for different scenarios, and a polynomial is also used to approximate the geodesic equations. The JiuZhaoQin-Horner polynomial algorithm and strategies of line searching and changing the iteration step size are also adopted. The gradient descent algorithm on the Stiefel manifold applied to Principal Component Analysis (PCA) is discussed in detail as an example of system feature extraction. Theoretical analysis and simulation experiments show that the new method can achieve superior performance in both convergence rate and calculation efficiency while ensuring the unitary column orthogonality. In addition, it is easier to implement by software or hardware.
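
    A hedged sketch of a Riemannian-style gradient step on the Stiefel manifold for PCA (maximize trace(W^T C W) subject to W^T W = I); the tangent-space projection and QR re-orthonormalization below stand in for the geodesic/polynomial retraction discussed in the abstract and are not the authors' exact scheme.

```python
# Projected-gradient ascent with QR retraction for the leading PCA subspace.
import numpy as np

def stiefel_pca(X, k, step=0.1, iters=200):
    C = np.cov(X, rowvar=False)
    W, _ = np.linalg.qr(np.random.randn(C.shape[0], k))  # random orthonormal start
    for _ in range(iters):
        grad = C @ W                                   # Euclidean gradient direction
        rgrad = grad - W @ (W.T @ grad)                # project onto the tangent space
        W, _ = np.linalg.qr(W + step * rgrad)          # retract back onto the manifold
    return W
```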

  7. A Review on Feature Extraction Techniques in Face Recognition

    Directory of Open Access Journals (Sweden)

    Rahimeh Rouhi

    2013-01-01

    Full Text Available Face recognition systems, due to their significant application in the security scopes, have been of great importance in recent years. The existence of an exact balance between the computing cost, robustness and their ability for face recognition is an important characteristic for such systems. Besides, trying to design systems performing under different conditions (e.g. illumination, variation of pose, different expression, etc.) is a challenging problem in the feature extraction of face recognition. As feature extraction is an important step in the face recognition operation, in the present study four techniques of feature extraction in face recognition were reviewed, subsequently comparable results were presented, and then the advantages and the disadvantages of these methods were discussed.

  8. Modification of evidence theory based on feature extraction

    Institute of Scientific and Technical Information of China (English)

    DU Feng; SHI Wen-kang; DENG Yong

    2005-01-01

    Although evidence theory has been widely used in information fusion due to its effectiveness of uncertainty reasoning, the classical DS evidence theory involves counter-intuitive behaviors when high conflict information exists. Many modification methods have been developed which can be classified into the following two kinds of ideas, either modifying the combination rules or modifying the evidence sources. In order to make the modification more reasonable and more effective, this paper gives a thorough analysis of some typical existing modification methods firstly, and then extracts the intrinsic feature of the evidence sources by using evidence distance theory. Based on the extracted features, two modified plans of evidence theory according to the corresponding modification ideas have been proposed. The results of numerical examples prove the good performance of the plans when combining evidence sources with high conflict information.
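
    To make the high-conflict issue above concrete, the sketch below implements classical Dempster combination for two mass functions over the same frame; the dictionary representation keyed by frozensets is an illustrative choice, not tied to the paper's modified plans.

```python
# Dempster's rule of combination for two basic probability assignments.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b                      # intersection of focal elements
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: classical rule is undefined")
    # normalize by 1 - K; this step is what behaves badly under high conflict
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Hypothetical usage:
# m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
# m2 = {frozenset({"B"}): 0.7, frozenset({"A", "B"}): 0.3}
# fused, K = dempster_combine(m1, m2)
```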

  9. FEATURES AND GROUND AUTOMATIC EXTRACTION FROM AIRBORNE LIDAR DATA

    OpenAIRE

    D. Costantino; M. G. Angelini

    2012-01-01

    The aim of the research has been the development and implementation of an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the moments of third order (Skewness) and fourth order (Kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of the weights of the measures, provided the desired results, which is a finer classification and l...

  10. Extracting BI-RADS Features from Portuguese Clinical Texts

    OpenAIRE

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method.

  12. Eddy current pulsed phase thermography and feature extraction

    Science.gov (United States)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposed an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. One steel sample is selected as the material under test to avoid the influence of skin depth, which provides subsurface defects with different depths. The experimental results show that this proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from differential phase spectra and the preliminary linear relationships are built to measure these subsurface defects' depth.

  13. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    Science.gov (United States)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.

  14. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial and Textural Information Extracted from Ikonos Imagery

    Science.gov (United States)

    Wang, H.; Zhao, Y.; Pu, R.; Zhang, Z.

    2016-10-01

    In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Four RF classification results for the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (4) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The experimental results demonstrate that (a) both textural and spatial information was more useful than spectral information in determining the Robinia pseudoacacia forest health conditions; and (b) the IKONOS NIR band was more powerful than the visible bands in quantifying varying degrees of forest crown dieback.
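
    A sketch of the final classification step described above: stacking spectral, spatial (Gi) and GLCM texture bands per pixel and classifying with a random forest in scikit-learn; the band arrays, label mask and tree count are hypothetical inputs and settings.

```python
# Per-pixel random forest classification from stacked feature bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_health(feature_bands, labels_mask):
    """feature_bands: list of 2-D arrays of identical shape (spectral, Gi,
    GLCM layers); labels_mask: 2-D int array, 0 = unlabeled, 1..K = classes."""
    stack = np.stack(feature_bands, axis=-1)
    X = stack.reshape(-1, stack.shape[-1])
    y = labels_mask.ravel()
    rf = RandomForestClassifier(n_estimators=200).fit(X[y > 0], y[y > 0])
    return rf.predict(X).reshape(labels_mask.shape)
```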

  15. Features and Ground Automatic Extraction from Airborne LIDAR Data

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been the development and implementation of an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the moments of third order (Skewness) and fourth order (Kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of weights on the measures, provided the desired results, which is a finer and less noisy classification. The process has been carried out in Matlab but, to reduce processing time given the large data density, the analysis has been limited to a moving window. It was, therefore, arranged to produce subscenes in order to cover the entire area. The performance of the algorithm confirms its robustness and the quality of its results. Employment of effective processing strategies to improve the automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation in 3D information extraction using remotely sensed large datasets. After obtaining the geometric features from LiDAR data, we want to complete the research by creating an algorithm to vectorize the features and extract the DTM.
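
    The sketch below shows a common skewness-balancing heuristic for separating ground from object points, in the spirit of the third-order-moment filtering described above; it uses SciPy and is not necessarily the authors' exact procedure.

```python
# Skewness balancing: drop the highest returns until elevations look symmetric.
import numpy as np
from scipy.stats import skew

def skewness_balanced_ground(z, tol=0.0):
    """Iteratively remove the highest point until the skewness of the
    remaining elevations is <= tol; what remains approximates ground."""
    z = np.sort(np.asarray(z, dtype=float))
    while z.size > 3 and skew(z) > tol:
        z = z[:-1]
    return z
```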

  16. Automated feature extraction for 3-dimensional point clouds

    Science.gov (United States)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  17. Feature Extraction and Pattern Identification for Anemometer Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Longji Sun

    2012-01-01

    Full Text Available Cup anemometers are commonly used for wind speed measurement in the wind industry. Anemometer malfunctions lead to excessive errors in measurement and directly influence the wind energy development for a proposed wind farm site. This paper is focused on feature extraction and pattern identification to solve the anemometer condition diagnosis problem of the PHM 2011 Data Challenge Competition. Since the accuracy of anemometers can be severely affected by the environmental factors such as icing and the tubular tower itself, in order to distinguish the cause due to anemometer failures from these factors, our methodologies start with eliminating irregular data (outliers under the influence of environmental factors. For paired data, the relation between the relative wind speed difference and the wind direction is extracted as an important feature to reflect normal or abnormal behaviors of paired anemometers. Decisions regarding the condition of paired anemometers are made by comparing the features extracted from training and test data. For shear data, a power law model is fitted using the preprocessed and normalized data, and the sum of the squared residuals (SSR is used to measure the health of an array of anemometers. Decisions are made by comparing the SSRs of training and test data. The performance of our proposed methods is evaluated through the competition website. As a final result, our team ranked the second place overall in both student and professional categories in this competition.
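
    For the shear-data step described above, a minimal sketch of fitting a power law v(h) = a·h^b to anemometer heights and speeds and scoring health by the sum of squared residuals (SSR), using NumPy; variable names are illustrative.

```python
# Power-law fit in log-log space, scored by sum of squared residuals.
import numpy as np

def power_law_ssr(heights, speeds):
    b, log_a = np.polyfit(np.log(heights), np.log(speeds), 1)   # slope, intercept
    predicted = np.exp(log_a) * np.asarray(heights) ** b
    return float(np.sum((np.asarray(speeds) - predicted) ** 2))
```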

  18. New insights into alkylammonium-functionalized clinoptilolite and Na-P1 zeolite: Structural and textural features

    Science.gov (United States)

    Muir, Barbara; Matusik, Jakub; Bajda, Tomasz

    2016-01-01

    The area of zeolites' application could be expanded by utilizing their surfaces. Zeolites are frequently modified to increase their hydrophobicity and to generate the negative charge of the surface. The main objective of the study was to investigate and compare the features of natural clinoptilolite and synthetic zeolite Na-P1 modified by selected surfactants involving quaternary ammonium salts. The FTIR study indicates that with increasing carbon chain length in the surfactant attached to the zeolites surface the molecules adopt a more disordered structure. FTIR was also used to determine the efficiency of surface modification. Thermal analysis revealed that the presence of surfactant results in additional exothermic effects associated with the breaking of electrostatic bonds between zeolites and surfactants. The mass losses are in line with ECEC and CHN data. The textural study indicates that the synthetic zeolite Na-P1 has better sorption properties than natural clinoptilolite. The modification process always reduces the SBET and porosity of the material. With an increasing carbon chain length of surfactants all the texture parameters decrease.

  19. A CAD system for cerebral glioma based on texture features in DT-MR images

    Energy Technology Data Exchange (ETDEWEB)

    De Nunzio, G., E-mail: giorgio.denunzio@unisalento.it [Dept. of Materials Science, University of Salento, Via Monteroni, 73100 Lecce (Italy); Pastore, G. [PO 'Vito Fazzi', UOC Fisica Sanitaria, Lecce (Italy); Donativi, M. [Dept. of Materials Science, University of Salento, Via Monteroni, 73100 Lecce (Italy); Castellano, A.; Falini, A. [Neuroradiology Unit and CERMAC Scientific Institute and University Vita-Salute San Raffaele, Milan (Italy)

    2011-08-21

    Tumor cells in cerebral glioma invade the surrounding tissues preferentially along white-matter tracts, spreading beyond the abnormal area seen on conventional MR images. Diffusion Tensor Imaging can reveal large peritumoral abnormalities in gliomas, which are not apparent on MRI. Our aim was to characterize pathological vs. healthy tissue in DTI datasets by 3D statistical Texture Analysis, developing an automatic segmentation technique (CAD, Computer Assisted Detection) for cerebral glioma based on a supervised classifier (an artificial neural network). A Matlab GUI (Graphical User Interface) was created to help the physician in the assisted diagnosis process and to optimize interactivity with the segmentation system, especially for patient follow-up during chemotherapy, and for preoperative assessment of tumor extension. Preliminary tissue classification results were obtained for the p map (the calculated area under the ROC curve, AUC, was 0.96) and the FA map (AUC=0.98). Test images were automatically segmented by tissue classification; manual and automatic segmentations were compared, showing good concordance.

  20. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregular sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on the top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.

  1. Correlative Feature Analysis for Multimodality Breast CAD

    Science.gov (United States)

    2009-09-01

    developed two sets of “large-scale” features. Firstly, we extracted a set of texture features based on a gray-level co-occurrence matrix (GLCM). For... each region, four GLCMs were constructed along four different directions of 0°, 45°, 90° and 135°. Assuming that there is no directional texture... features in mammograms, a non-directional GLCM was obtained by summing all the directional GLCMs. Texture features were then computed from each non

  2. A Novel Feature Extraction for Robust EMG Pattern Recognition

    CERN Document Server

    Phinyomark, Angkoon; Phukpattaranont, Pornchai

    2009-01-01

    Varieties of noise are a major problem in recognition of the Electromyography (EMG) signal. Hence, methods to remove noise become most significant in EMG signal analysis. White Gaussian noise (WGN) is used to represent interference in this paper. Generally, WGN is difficult to remove using typical filtering, and solutions to remove WGN are limited. In addition, noise removal is an important step before performing feature extraction, which is used in EMG-based recognition. This research aims to present novel features that tolerate WGN. As a result, a noise removal algorithm is not needed. Two novel mean and median frequencies (MMNF and MMDF) are presented for robust feature extraction. Sixteen existing features and the two novelties are evaluated in a noisy environment. WGN with various signal-to-noise ratios (SNRs), i.e. 20-0 dB, was added to the original EMG signal. The results showed that MMNF performed very well, especially in weak EMG signals, compared with the others. The error of MMNF in weak EMG signal with...
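
    As background for the features above, the sketch below computes the classical spectral mean and median frequencies (MNF/MDF) that the proposed MMNF/MMDF modify; pure NumPy, with an assumed sampling rate.

```python
# Mean and median frequency of an EMG segment from its power spectrum.
import numpy as np

def mean_median_frequency(emg, fs=1000.0):
    spectrum = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)             # mean frequency
    cumulative = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]  # median frequency
    return mnf, mdf
```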

  3. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Wei-Cheng Li; Chung-Lin Huang; Pei-Yeh Chang

    2014-01-01

    In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for assessment of asymmetry. The features of 3D point clouds of an infant’s cranium can be identified by local feature analysis and a two-phase k-means classification algorithm. The 3D images of infants with an asymmetric cranium can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model for the measurement of asymmetry. Numerical data of the cranial volume can be reviewed by a pediatrician to adjust the treatment plan. The system can also be used to demonstrate the treatment progress.

  4. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, the local anomalistic blocks such as crypts, furrows, and so on in the iris are initially used directly as iris features. A novel image segmentation method based on an intersecting cortical model (ICM) neural network was introduced to segment these anomalistic blocks. First, the normalized iris image was put into the ICM neural network after enhancement. Second, the iris features were segmented out perfectly and were output in binary image type by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network was chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming Distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.

  5. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2014-06-01

    Full Text Available User indoor positioning has been under constant improvement especially with the availability of new sensors integrated into the modern mobile devices, which allows us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the feature extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: home and office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user’s location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios.

  6. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    Science.gov (United States)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system in regards to flood hazards. The project utilizes the use of remote sensing technologies in determining the lives in probable dire danger by mapping and attributing building features using LiDAR dataset and satellite imageries. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has been done so far with the use of handheld Global Positioning System (GPS). Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP is dependent on either the satellite imageries or orthophotograph images of half-meter resolution obtained during LiDAR acquisition and not on the GPS of three-meter accuracy. The attributed building features are overlain to the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features as obtained from satellite imageries may not only be used in flood exposure assessment but may also be used in assessing other hazards and a number of other uses. Several other features may also be extracted from the satellite imageries.

  7. Modeling the Relationship between Texture Semantics and Textile Images

    Directory of Open Access Journals (Sweden)

    Xiaohui Wang

    2011-09-01

    Full Text Available Texture semantics, i.e., the kind of feelings that the texture of an image arouses in people, is important in texture analysis. In this paper, we study the relationship between texture semantics and textile images, and propose a novel parametric mapping model to predict texture semantics from textile images. To represent rich texture semantics and enable it to participate in computation, a 2D continuous semantic space, whose axes correspond to hard-soft and warm-cool, is first adopted to quantitatively describe texture semantics. Then texture features of textile images are extracted using Gabor decomposition. Finally, the mapping model between texture features and texture semantics in the semantic space is built using three different methods: linear regression, k-nearest neighbor (KNN) and multi-layered perceptron (MLP). The performance of the proposed mapping model is evaluated on a dataset of 1352 textile images. The results confirm that the mapping model is effective, and KNN and MLP in particular achieve good performance. We further apply the mapping model to two applications: automatic textile image annotation with texture semantics and textile image search based on texture semantics. The subjective experimental results are consistent with human perception, which verifies the effectiveness of the proposed mapping model. The proposed model and its applications can be applied to various automation systems in the commercial textile industry.
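    As a minimal sketch of such a pipeline (assuming scikit-image and scikit-learn; the textile images, ratings and filter-bank settings below are invented), Gabor responses can be summarized into a feature vector and regressed onto the 2D (hard-soft, warm-cool) space with KNN:

      # Hypothetical sketch: Gabor texture features mapped to a 2D semantic space with KNN.
      import numpy as np
      from skimage.filters import gabor
      from sklearn.neighbors import KNeighborsRegressor

      def gabor_features(image, frequencies=(0.1, 0.2, 0.4),
                         angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          """Mean and std of Gabor magnitude responses over a small filter bank."""
          feats = []
          for f in frequencies:
              for theta in angles:
                  real, imag = gabor(image, frequency=f, theta=theta)
                  mag = np.hypot(real, imag)
                  feats.extend([mag.mean(), mag.std()])
          return np.array(feats)

      # Placeholder data: grayscale textile patches and rated (hard-soft, warm-cool) scores.
      rng = np.random.default_rng(1)
      images = [rng.random((64, 64)) for _ in range(40)]
      semantics = rng.uniform(-1, 1, size=(40, 2))

      X = np.vstack([gabor_features(im) for im in images])
      model = KNeighborsRegressor(n_neighbors=5).fit(X, semantics)
      print(model.predict(X[:1]))   # predicted (hard-soft, warm-cool) coordinates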

  8. Detection of corn and weed species by the combination of spectral, shape and textural features

    Science.gov (United States)

    Accurate detection of weeds in farmland can help reduce pesticide use and protect the agricultural environment. To develop intelligent equipment for weed detection, this study used an imaging spectrometer system, which supports micro-scale plant feature analysis by acquiring high-resolution hyper sp...

  9. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To address the difficulty of precisely extracting target outlines caused by neglecting the variation of target scattering characteristics during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important aspects that affect target feature extraction and SAR image quality are analyzed, including curved orbit, stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed as well. Based on this analysis, the mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristics. Moreover, a fusion imaging strategy and method under high-resolution and ultra-large observation angle range conditions are put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  10. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-08-09

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Entropy Analysis as an Electroencephalogram Feature Extraction Method

    Directory of Open Access Journals (Sweden)

    P. I. Sotnikov

    2014-01-01

    Full Text Available The aim of this study was to evaluate a possibility for using an entropy analysis as an electroencephalogram (EEG feature extraction method in brain-computer interfaces (BCI. The first section of the article describes the proposed algorithm based on the characteristic features calculation using the Shannon entropy analysis. The second section discusses issues of the classifier development for the EEG records. We use a support vector machine (SVM as a classifier. The third section describes the test data. Further, we estimate an efficiency of the considered feature extraction method to compare it with a number of other methods. These methods include: evaluation of signal variance; estimation of spectral power density (PSD; estimation of autoregression model parameters; signal analysis using the continuous wavelet transform; construction of common spatial pattern (CSP filter. As a measure of efficiency we use the probability value of correctly recognized types of imagery movements. At the last stage we evaluate the impact of EEG signal preprocessing methods on the final classification accuracy. Finally, it concludes that the entropy analysis has good prospects in BCI applications.
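    A minimal sketch of the core idea (not the authors' exact pipeline; the EEG data and bin count are placeholders) is to compute the Shannon entropy of each channel's amplitude distribution and feed the resulting feature vectors to an SVM:

      # Hypothetical sketch: Shannon-entropy features from EEG channels classified with an SVM.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def shannon_entropy(signal, bins=32):
          """Entropy of the amplitude histogram of one EEG channel."""
          hist, _ = np.histogram(signal, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(2)
      trials = rng.normal(size=(100, 8, 512))      # placeholder trials x channels x samples
      labels = rng.integers(0, 2, size=100)        # two imagined-movement classes

      features = np.array([[shannon_entropy(ch) for ch in trial] for trial in trials])
      print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean())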

  12. Data Clustering Analysis Based on Wavelet Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    QIANYuntao; TANGYuanyan

    2003-01-01

    A novel wavelet-based data clustering method is presented in this paper, which includes wavelet feature extraction and a cluster growing algorithm. The wavelet transform can provide rich and diversified information for representing the global and local inherent structures of a dataset; therefore, it is a very powerful tool for clustering feature extraction. As an unsupervised classification, the target of clustering analysis is dependent on the specific clustering criteria. Several criteria that should be considered for a general-purpose clustering algorithm are proposed, and the cluster growing algorithm is constructed to connect these clustering criteria with the wavelet features. Compared with other popular clustering methods, our clustering approach provides multi-resolution clustering results, needs few prior parameters, correctly deals with irregularly shaped clusters, and is insensitive to noise and outliers. As this wavelet-based clustering method is aimed at solving two-dimensional data clustering problems, for high-dimensional datasets the self-organizing map and U-matrix method are applied to transform them into a two-dimensional Euclidean space so that high-dimensional data clustering analysis can be performed. Results on some simulated data and standard test data are reported to illustrate the power of our method.

  13. A Novel Feature Cloud Visualization for Depiction of Product Features Extracted from Customer Reviews

    Directory of Open Access Journals (Sweden)

    Tanvir Ahmad

    2013-09-01

    Full Text Available There has been an exponential growth of web content on the World Wide Web, with online users contributing the majority of the unstructured data, which also contains a good amount of information on many different subjects ranging from products and news to programmes and services. Other users often read these reviews and try to understand the meaning of the sentences expressed by the reviewers. Since the number and length of the reviews are large, most of the time a user will read only a few reviews yet would still like to take an informed decision on the subject being discussed. Websites have adopted many different methods, such as numerical rating, star rating and percentage rating. However, these methods fail to give information on the explicit features of the product and their overall weight when considering the product in totality. In this paper, a framework is presented which first calculates the weight of each feature depending on the satisfaction or dissatisfaction expressed by users on individual features, and further a feature cloud visualization is proposed which uses two levels of specificity: the first level lists the extracted features and the second level shows the opinions on those features. A font generation function is applied which calculates the font size depending on the importance of the features vis-a-vis the opinions expressed on them.
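    A toy sketch of such a font generation function (the scaling rule and feature weights below are assumptions, not the paper's formula) simply maps each feature's normalized weight to a point size:

      # Hypothetical sketch: font size scaled by the weight of each extracted product feature.
      def font_size(weight, weights, min_pt=10, max_pt=48):
          """Linearly scale a feature's weight into a font size in points."""
          lo, hi = min(weights), max(weights)
          if hi == lo:
              return (min_pt + max_pt) // 2
          return int(min_pt + (weight - lo) / (hi - lo) * (max_pt - min_pt))

      feature_weights = {"battery": 0.92, "lens": 0.61, "flash": 0.18}   # assumed weights
      for name, w in feature_weights.items():
          print(name, font_size(w, feature_weights.values()))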

  14. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96% and proved superior to that of the original features and a traditional feature extraction method.

  15. Features extraction from the electrocatalytic gas sensor responses

    Science.gov (United States)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One of the types of gas sensors used for detection and identification of toxic air pollutants is the electrocatalytic gas sensor. Electrocatalytic sensors, working in cyclic voltammetry mode, enable detection of various gases. Their responses are in the form of I-V curves, which contain information about the type and the concentration of the measured volatile compound. However, additional analysis is required to provide efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for such an application, but further investigations on improving the processing of the sensor responses are required. In this article, a method for the extraction of parameters from electrocatalytic sensor responses is presented. The extracted features enable a significant reduction of data dimension without loss of recognition efficiency for four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide and sulfur dioxide.

  16. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact and produced a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also provide reviews or opinions on the products they purchased. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, and producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed so that the reader can grasp the point of the overall opinion. The idea comes from review summarization, which summarizes the overall opinion based on the sentiment and features it contains. In this study, the main focus domain is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion, 2) identifying the features of the product, 3) identifying whether an opinion is positive or negative, and 4) summarizing the result. The methods used include Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, one of the tools in Natural Language Processing (NLP), together with a knowledge-based dictionary that is useful for handling implicit features. The end result is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
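    A minimal sketch of the sentiment classification step (step 3) using Naïve Bayes is shown below; the reviews, labels and bag-of-words representation are illustrative assumptions, not the study's corpus or exact features.

      # Hypothetical sketch: Naive Bayes sentiment classification of camera reviews.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      reviews = [
          "the zoom lens is sharp and fast",
          "battery life is terrible and the flash is weak",
          "excellent picture quality for the price",
          "the autofocus keeps failing in low light",
      ]
      labels = ["positive", "negative", "positive", "negative"]

      clf = make_pipeline(CountVectorizer(), MultinomialNB())
      clf.fit(reviews, labels)
      print(clf.predict(["the battery drains quickly"]))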

  17. Extract relevant features from DEM for groundwater potential mapping

    Science.gov (United States)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been applied widely in groundwater potential mapping research, but when applied to data-scarce areas it encounters many problems due to limited data. A Digital Elevation Model (DEM) is a digital representation of the topography and has many applications in various fields. Previous research has shown that much information relevant to groundwater potential mapping (such as geological features, terrain features, hydrological features, etc.) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most easily accessed datasets in GIS, was used to extract information for groundwater potential mapping in the batter river basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies: lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI). Extraction methods for the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the groundwater potential map. The final groundwater potential map was divided into five categories: non-potential, poor, moderate, good, and excellent zones. Eventually, the success rate curve was drawn and the area under the curve (AUC) was computed for validation. The validation result showed that the success rate of the model was 79%, confirming the method's feasibility. The method affords a new approach for research on groundwater management in areas suffering from data scarcity, and also broadens the application area of DEM data.
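    As an illustration of one of the determining factors, the topographic wetness index is commonly computed as TWI = ln(a / tan(beta)), where a is the specific catchment area and beta the local slope. The sketch below assumes a flow-accumulation grid produced by a separate flow-routing step and uses invented raster values.

      # Hypothetical sketch: topographic wetness index (TWI) from DEM derivatives.
      import numpy as np

      def twi(flow_accumulation, slope_deg, cell_size=30.0, eps=1e-6):
          """TWI = ln(specific catchment area / tan(slope))."""
          specific_area = (flow_accumulation + 1) * cell_size   # per unit contour length
          tan_beta = np.tan(np.radians(slope_deg)) + eps        # avoid division by zero
          return np.log(specific_area / tan_beta)

      rng = np.random.default_rng(3)
      slope = rng.uniform(0.5, 30.0, size=(100, 100))           # slope in degrees
      flow_acc = rng.integers(0, 500, size=(100, 100))          # upslope cell counts
      print("mean TWI:", twi(flow_acc, slope).mean())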

  18. Feature Extraction from Subband Brain Signals and Its Classification

    Science.gov (United States)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both the non-stationarity and the independence/uncorrelatedness criteria, along with the asymmetry ratio, over electroencephalogram (EEG) signals and proposes a hybrid signal preprocessing approach prior to feature extraction. A filter bank approach based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthogonality and orthonormality criteria do not hold for the whitening matrix and the separating matrix, respectively. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, and extend the AMUSE algorithm to the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are then subjected to the feature extraction step (power spectral density) followed by linear discriminant analysis (LDA).
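    A minimal sketch of the filter-bank step (using PyWavelets; the sampling rate, wavelet choice and signal are assumptions) decomposes a raw EEG channel into rhythm subbands with a discrete wavelet transform:

      # Hypothetical sketch: DWT decomposition of one EEG channel into rhythm subbands.
      import numpy as np
      import pywt

      fs = 256                                             # assumed sampling rate (Hz)
      eeg = np.random.default_rng(5).normal(size=4 * fs)   # placeholder 4-second EEG channel

      # A 5-level db4 decomposition at fs = 256 Hz yields coefficient sets that roughly
      # cover the delta, theta, alpha, beta and gamma ranges.
      coeffs = pywt.wavedec(eeg, "db4", level=5)
      bands = ["approx (<4 Hz)", "4-8 Hz", "8-16 Hz", "16-32 Hz", "32-64 Hz", "64-128 Hz"]
      for band, c in zip(bands, coeffs):
          print(band, len(c), "coefficients, power", float(np.mean(c ** 2)))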

  19. Feature Extraction and Analysis of Breast Cancer Specimen

    Science.gov (United States)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinoma cells are also considered for comparison. In effect, features of cancerous (invasive) breast tissue are extracted and analysed against those of normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some extent.

  20. Point features extraction: towards slam for an autonomous underwater vehicle

    CSIR Research Space (South Africa)

    Matsebe, O

    2010-07-01

    Full Text Available The vehicle is equipped with a Mechanically Scanned Imaging Sonar (Micron DST Sonar) which is able...

  1. Ensemble Feature Extraction Modules for Improved Hindi Speech Recognition System

    Directory of Open Access Journals (Sweden)

    Malay Kumar

    2012-05-01

    Full Text Available Speech is the most natural way of communication between human beings. The field of speech recognition raises the intriguing prospect of man-machine conversation, and due to its versatile applications, automatic speech recognition systems have been designed. In this paper we present a novel approach to Hindi speech recognition that ensembles the feature extraction modules of ASR systems and combines their outputs using the ROVER voting technique. Experimental results show that the proposed system produces better results than traditional ASR systems.

  2. Unsupervised clustering analyses of features extraction for a caries computer-assisted diagnosis using dental fluorescence images

    Science.gov (United States)

    Bessani, Michel; da Costa, Mardoqueu M.; Lins, Emery C. C. C.; Maciel, Carlos D.

    2014-02-01

    Computer-assisted diagnoses (CAD) are performed by systems with embedded knowledge. These systems work as a second opinion to the physician and use patient data to infer diagnoses for health problems. Caries is the most common oral disease and directly affects both individuals and the society. Here we propose the use of dental fluorescence images as input of a caries computer-assisted diagnosis. We use texture descriptors together with statistical pattern recognition techniques to measure the descriptors performance for the caries classification task. The data set consists of 64 fluorescence images of in vitro healthy and carious teeth including different surfaces and lesions already diagnosed by an expert. The texture feature extraction was performed on fluorescence images using RGB and YCbCr color spaces, which generated 35 different descriptors for each sample. Principal components analysis was performed for the data interpretation and dimensionality reduction. Finally, unsupervised clustering was employed for the analysis of the relation between the output labeling and the diagnosis of the expert. The PCA result showed a high correlation between the extracted features; seven components were sufficient to represent 91.9% of the original feature vectors information. The unsupervised clustering output was compared with the expert classification resulting in an accuracy of 96.88%. The results show the high accuracy of the proposed approach in identifying carious and non-carious teeth. Therefore, the development of a CAD system for caries using such an approach appears to be promising.
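    A minimal sketch of this analysis chain (simulated descriptor matrix; scikit-learn assumed, with k-means standing in for the unspecified clustering algorithm) standardizes the texture descriptors, reduces them with PCA and clusters the scores without labels:

      # Hypothetical sketch: PCA reduction and unsupervised clustering of texture descriptors.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(6)
      descriptors = rng.normal(size=(64, 35))        # 64 teeth, 35 texture descriptors

      X = StandardScaler().fit_transform(descriptors)
      pca = PCA(n_components=7).fit(X)               # 7 components retained, as in the abstract
      scores = pca.transform(X)
      print("explained variance:", pca.explained_variance_ratio_.sum())

      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
      print("cluster sizes:", np.bincount(clusters))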

  3. New learning subspace method for image feature extraction

    Institute of Scientific and Technical Information of China (English)

    CAO Jian-hai; LI Long; LU Chang-hou

    2006-01-01

    A new method, the Windows Minimum/Maximum Module Learning Subspace Algorithm (WMMLSA), for image feature extraction is presented. The WMMLSA is insensitive to the order of the training samples and can effectively regulate the radical vectors of an image feature subspace by selecting the study samples for the subspace iterative learning algorithm, so it can improve the robustness and generalization capacity of a pattern subspace and enhance the recognition rate of a classifier. At the same time, a pattern subspace is built by the PCA method. A classifier based on the WMMLSA is successfully applied to recognize pressed characters in gray-scale images. The results indicate that the correct recognition rate with the WMMLSA is higher than that with the Average Learning Subspace Method, and that both the training speed and the classification speed are improved. The new method is more applicable and efficient.

  4. Reaction Decoder Tool (RDT): extracting features from chemical reactions

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W.; Holliday, Gemma L.; Steinbeck, Christoph; Thornton, Janet M.

    2016-01-01

    Summary: Extracting chemical features like Atom–Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. Availability and implementation: This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder Contact: asad@ebi.ac.uk or s9asad@gmail.com PMID:27153692

  5. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  6. Graph-driven features extraction from microarray data

    CERN Document Server

    Vert, J P; Vert, Jean-Philippe; Kanehisa, Minoru

    2002-01-01

    Gene function prediction from microarray data is a first step toward better understanding the machinery of the cell from relatively cheap and easy-to-produce data. In this paper we investigate whether the knowledge of many metabolic pathways and their catalyzing enzymes accumulated over the years can help improve the performance of classifiers for this problem. The complex network of known biochemical reactions in the cell results in a representation where genes are nodes of a graph. Formulating the problem as a graph-driven feature extraction problem, based on the simple idea that relevant features are likely to exhibit correlation with respect to the topology of the graph, we end up with an algorithm which involves encoding the network and the set of expression profiles into kernel functions, and performing a regularized form of canonical correlation analysis in the corresponding reproducing kernel Hilbert spaces. Function prediction experiments for the genes of the yeast S. Cerevisiae validate this appro...

  7. Road marking features extraction using the VIAPIX® system

    Science.gov (United States)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system allowing lane detection for marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects present on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road markings, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.

  8. Hyperspectral image classification based on volumetric texture and dimensionality reduction

    Science.gov (United States)

    Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui

    2015-06-01

    A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, the volumetric textural features were extracted by volumetric gray-level co-occurrence matrices (VGLCM). The spectral features were extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) band-clustering algorithm. Moreover, four feature combination schemes were designed for hyperspectral image classification using spectral and textural features. It has been proven that the proposed method using VGLCM outperforms the gray-level co-occurrence matrix (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.

  9. Video Copy Detection Method Based on Color and Texture Features of Key Frames

    Institute of Scientific and Technical Information of China (English)

    陈秀新; 贾克斌; 魏世昂

    2012-01-01

    A video copy detection method based on the color and texture features of key frames is proposed. First, key frames are extracted from the video using a sub-clip method. Then, each key frame is divided into 3 blocks and a three-dimensional quantized color histogram is extracted from each block; color matching is based on histogram intersection. Texture features are further extracted from the key frames of the retrieved candidate videos and are represented by the angular second moment and entropy of the gray-level co-occurrence matrix. The matching of texture features filters out more irrelevant videos. Experiments show that this method is effective, highly robust and applicable to various types of videos.
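    A minimal sketch of the two matching stages (scikit-image assumed; grayscale frames and a single-channel histogram are used here for brevity, whereas the paper uses 3D quantized color histograms on blocks) is given below:

      # Hypothetical sketch: histogram intersection for colour matching and GLCM
      # angular second moment (ASM) / entropy for texture matching.
      import numpy as np
      from skimage.feature import graycomatrix

      def histogram_intersection(h1, h2):
          """Similarity of two normalized histograms."""
          return np.minimum(h1, h2).sum()

      def glcm_asm_entropy(gray_block, levels=64):
          """ASM and entropy of the grey-level co-occurrence matrix of one block."""
          q = (gray_block.astype(float) / 256 * levels).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
          p = glcm[:, :, 0, 0]
          asm = np.sum(p ** 2)
          entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
          return asm, entropy

      rng = np.random.default_rng(7)
      frame_a = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
      frame_b = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)

      ha, _ = np.histogram(frame_a, bins=27, range=(0, 256))
      hb, _ = np.histogram(frame_b, bins=27, range=(0, 256))
      print("colour similarity:", histogram_intersection(ha / ha.sum(), hb / hb.sum()))
      print("texture (ASM, entropy):", glcm_asm_entropy(frame_a))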

  10. Textural Feature Selection for Enhanced Detection of Stationary Humans in Through the Wall Radar Imagery

    Science.gov (United States)

    2014-05-02

    [Figure 2: (a) Through-the-wall MIMO system. (b) Building used for through-the-wall measurements (the dashed square indicates the ...).] The surviving abstract fragments describe decision-tree-based textural feature selection in which candidate features are ranked by the Gini index, computed as a weighted average over the training samples at a parent node and its child nodes, after the training samples are first sorted by the values they take for each feature.

  11. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT

    Directory of Open Access Journals (Sweden)

    Daniel Markel

    2013-01-01

    Full Text Available Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT) and due to blurriness, lower spatial resolution, and the lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes, each containing combinations of 3 features, was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed the DT-KNN classifier had the highest sensitivity of 73.9%, the second highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation applied to contours drawn by 3 trained physicians.

  12. SU-D-BRA-07: A Phantom Study to Assess the Variability in Radiomics Features Extracted From Cone-Beam CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Fave, X; Fried, D [UT MD Anderson Cancer Center, Houston, TX (United States); UT Health Science Center Graduate School of Biomedical Sciences, Houston, TX (United States); Zhang, L; Yang, J; Balter, P; Followill, D; Gomez, D; Jones, A; Stingo, F; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Several studies have demonstrated the prognostic potential for texture features extracted from CT images of non-small cell lung cancer (NSCLC) patients. The purpose of this study was to determine if these features could be extracted with high reproducibility from cone-beam CT (CBCT) images in order for features to be easily tracked throughout a patient’s treatment. Methods: Two materials in a radiomics phantom, designed to approximate NSCLC tumor texture, were used to assess the reproducibility of 26 features. This phantom was imaged on 9 CBCT scanners, including Elekta and Varian machines. Thoracic and head imaging protocols were acquired on each machine. CBCT images from 27 NSCLC patients imaged using the thoracic protocol on Varian machines were obtained for comparison. The variance for each texture measured from these patients was compared to the variance in phantom values for different manufacturer/protocol subsets. Levene’s test was used to identify features which had a significantly smaller variance in the phantom scans versus the patient data. Results: Approximately half of the features (13/26 for material1 and 15/26 for material2) had a significantly smaller variance (p<0.05) between Varian thoracic scans of the phantom compared to patient scans. Many of these same features remained significant for the head scans on Varian (12/26 and 8/26). However, when thoracic scans from Elekta and Varian were combined, only a few features were still significant (4/26 and 5/26). Three features (skewness, coarsely filtered mean and standard deviation) were significant in almost all manufacturer/protocol subsets. Conclusion: Texture features extracted from CBCT images of a radiomics phantom are reproducible and show significantly less variation than the same features measured from patient images when images from the same manufacturer or with similar parameters are used. Reproducibility between CBCT scanners may be high enough to allow the extraction of

  13. THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES.

    Science.gov (United States)

    Shirvaikar, Mukul; Huang, Ning; Dong, Xuanliang Neil

    2016-10-01

    In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors to bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50 μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the central thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors.
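    Since cluster shade is not among the properties built into common GLCM helpers, the sketch below (scikit-image assumed, with a simulated projection image) computes it directly from the normalized co-occurrence matrix alongside two standard GLCM properties:

      # Hypothetical sketch: GLCM statistics including cluster shade from a projection image.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(8)
      projection = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # placeholder image

      glcm = graycomatrix(projection, distances=[1], angles=[0], levels=64,
                          symmetric=True, normed=True)
      p = glcm[:, :, 0, 0]
      i, j = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
      mu_i, mu_j = np.sum(i * p), np.sum(j * p)

      cluster_shade = np.sum(((i + j - mu_i - mu_j) ** 3) * p)
      print("cluster shade:", cluster_shade)
      print("contrast:", graycoprops(glcm, "contrast")[0, 0])
      print("homogeneity:", graycoprops(glcm, "homogeneity")[0, 0])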

  14. Extraction of sandy bedforms features through geodesic morphometry

    Science.gov (United States)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide ranges of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples that are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  15. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

    Full Text Available The availability of large volumes of remote sensing data insists on a higher degree of automation in feature extraction, making it the need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation or bodies of water by using a combination of spectral and elevation characteristics. Utilizing the aforementioned features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice. The huge quantum of data that needs to be processed entails accelerated processing. GPUs, which were originally designed to provide efficient visualization, are being massively employed for computation-intensive parallel processing environments. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques. Two Laplacian of Gaussian (LoG) masks were applied to the image individually, followed by detection of zero-crossing points and extraction of the pixels based on their standard deviation with respect to the surrounding pixels. The two extracted images with different LoG masks were combined, resulting in an image with the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content in the extracted image: the image is passed through a hybrid median filter to remove salt-and-pepper noise. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, system-level challenges, and quantifies the benefits of integrating GPUs in such an environment. The
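    A CPU-only sketch of the edge-extraction steps (the GPU acceleration that is the paper's point is not reproduced; SciPy assumed, image simulated) applies two Laplacian-of-Gaussian responses, detects zero crossings, combines them and median-filters the result:

      # Hypothetical sketch: LoG filtering, zero-crossing detection and median filtering.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(9)
      band = rng.random((256, 256))                 # placeholder panchromatic band

      def zero_crossings(response):
          """Mask of sign changes between horizontally/vertically adjacent pixels."""
          zc = np.zeros_like(response, dtype=bool)
          zc[:, 1:] |= np.signbit(response[:, 1:]) != np.signbit(response[:, :-1])
          zc[1:, :] |= np.signbit(response[1:, :]) != np.signbit(response[:-1, :])
          return zc

      log_small = ndimage.gaussian_laplace(band, sigma=1.0)     # fine-scale LoG mask
      log_large = ndimage.gaussian_laplace(band, sigma=3.0)     # coarse-scale LoG mask
      edges = zero_crossings(log_small) | zero_crossings(log_large)

      cleaned = ndimage.median_filter(edges.astype(np.uint8), size=3)   # remove salt-and-pepper noise
      print("edge pixels:", int(cleaned.sum()))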

  16. Analysis of GLCM Parameters for Textures Classification on UMD Database Images

    OpenAIRE

    Mohamed, Alsadegh Saleh Saied; Lu, Joan

    2015-01-01

    Texture analysis is one of the most important techniques that have been used in image processing for many purposes, including image classification. The texture determines the region of a given gray level image, and reflects its relevant information. Several methods of analysis have been invented and developed to deal with texture in recent years, and each one has its own method of extracting features from the texture. These methods can be divided into two main approaches: statistical methods ...

  17. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  19. License plate location based on texture features

    Institute of Scientific and Technical Information of China (English)

    李文锋; 张红英

    2014-01-01

    In this paper, a license plate location method based on texture features is presented. First, image edges are extracted and edge points that are horizontally close to each other are connected; mathematical morphology operations then form several candidate regions. Next, the license plate is located more precisely according to the length of horizontal edge segments and a projection histogram. Finally, the real license plate area is chosen from the candidate regions based on the license plate size. The experimental results demonstrate that the locating accuracy of this method reaches 94.6%, with an average locating time of 450 milliseconds.

  20. SU-E-J-251: Incorporation of Pre-Therapy 18F-FDG Uptake with CT Texture Features in a Predictive Model for Radiation Pneumonitis Development

    Energy Technology Data Exchange (ETDEWEB)

    Anthony, G; Cunliffe, A; Armato, S; Al-Hallaq, H [The University of Chicago, Chicago, IL (United States); Castillo, R [Univ Texas Medical Branch of Galveston, Pearland, TX (United States); Pham, N [Baylor College of Medicine, Houston, TX (United States); Guerrero, T [Beaumont Health System, Royal Oak, MI (United States)

    2015-06-15

    Purpose: To determine whether the addition of standardized uptake value (SUV) statistical variables to CT lung texture features can improve a predictive model of radiation pneumonitis (RP) development in patients undergoing radiation therapy. Methods: Anonymized data from 96 esophageal cancer patients (18 RP-positive cases of Grade ≥ 2) were retrospectively collected, including pre-therapy PET/CT scans, pre-/post-therapy diagnostic CT scans and RP status. Twenty texture features (first-order, fractal, Laws' filter and gray-level co-occurrence matrix) were calculated from diagnostic CT scans and compared in anatomically matched regions of the lung. The mean, maximum, standard deviation, and 50th–95th percentiles of the SUV values for all lung voxels in the corresponding PET scans were acquired. For each texture feature, a logistic regression-based classifier consisting of (1) the average change in that texture feature value between the pre- and post-therapy CT scans and (2) the pre-therapy SUV standard deviation (SUVSD) was created. The RP-classification performance of each logistic regression model was compared to the performance of its texture feature alone by computing areas under the receiver operating characteristic curves (AUCs). T-tests were performed to determine whether the mean AUC across texture features changed significantly when SUVSD was added to the classifier. Results: The AUC for single-texture-feature classifiers ranged from 0.58–0.81 in high-dose (≥ 30 Gy) regions of the lungs and from 0.53–0.71 in low-dose (< 10 Gy) regions. Adding SUVSD in a logistic regression model using a 50/50 data partition for training and testing significantly increased the mean AUC by 0.08, 0.06 and 0.04 in the low-, medium- and high-dose regions, respectively. Conclusion: Addition of SUVSD from a pre-therapy PET scan to a single CT-based texture feature improves RP-classification performance on average. These findings demonstrate the potential for
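    A minimal sketch of one such per-feature classifier (simulated values; scikit-learn assumed) fits a logistic regression on the texture-feature change and SUVSD and scores it with the ROC AUC:

      # Hypothetical sketch: two-variable logistic regression classifier scored by ROC AUC.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(10)
      n = 96
      texture_change = rng.normal(size=n)               # change in one CT texture feature
      suv_sd = rng.normal(loc=1.5, scale=0.4, size=n)   # pre-therapy SUV standard deviation
      rp = rng.integers(0, 2, size=n)                   # RP status (placeholder labels)

      X = np.column_stack([texture_change, suv_sd])
      X_tr, X_te, y_tr, y_te = train_test_split(X, rp, test_size=0.5, random_state=0)

      model = LogisticRegression().fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      print("ROC AUC:", round(auc, 3))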

  1. Computer-aided diagnosis of psoriasis skin images with HOS, texture and color features: A first comparative study of its kind.

    Science.gov (United States)

    Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S

    2016-04-01

    Psoriasis is an autoimmune skin disease with red and scaly plaques on the skin, affecting about 125 million people worldwide. Currently, dermatologists use visual and haptic methods for diagnosing disease severity. This does not help them in stratification and risk assessment of the lesion stage and grade, and it adds complexity during the monitoring and follow-up phase. The current diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using a principal component analysis (PCA) based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color features, and their seven combinations. In total, 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin are used in our database. Machine learning using PCA is used for dominant feature selection, which is then fed to a support vector machine (SVM) classifier to obtain optimized performance. Three different protocols are implemented using the three kinds of feature sets. A reliability index of the CADx is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy and 100% sensitivity and specificity when all three sets of features are combined. Further, our experimental results with increasing data size show that all feature combinations yield a high reliability index throughout the PCA cutoffs, except the color feature set and the combination of color and texture feature sets. HOS features are powerful in psoriasis disease classification and stratification. Even though all three feature sets (HOS, texture, and color) perform competitively on their own, the machine learning system performs best when they are combined. The system is fully automated, reliable and accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Real-time hypothesis driven feature extraction on parallel processing architectures

    DEFF Research Database (Denmark)

    Granmo, O.-C.; Jensen, Finn Verner

    2002-01-01

    Feature extraction in content-based indexing of media streams is often computationally intensive. Typically, a parallel processing architecture is necessary for real-time performance when extracting features by brute force. On the other hand, Bayesian network based systems for hypothesis driven feature......, rather than one-by-one. Thereby, the advantages of parallel feature extraction can be combined with the advantages of hypothesis driven feature extraction. The technique is based on a sequential backward feature set search and a correlation based feature set evaluation function. In order to reduce

  3. Analyzing edge detection techniques for feature extraction in dental radiographs

    Directory of Open Access Journals (Sweden)

    Kanika Lakhani

    2016-09-01

    Full Text Available Several dental problems can be detected using radiographs, but the main issue with radiographs is that the features of interest are often not very prominent. In this paper, two well-known edge detection techniques have been implemented for a set of 20 radiographs and the number of pixels in each image has been calculated. Further, a Gaussian filter has been applied to smooth the images so as to highlight the defect in the tooth. If the image data are available in pixel form for both healthy and decayed teeth, the images can easily be compared using edge detection techniques and diagnosis becomes much easier. Further, the Laplacian edge detection technique is applied to sharpen the edges of the given image. The aim is to detect discontinuities in dental radiographs when compared to the original healthy tooth. Future work includes feature extraction on the images for the classification of dental problems.
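    A minimal sketch of such a processing chain (scikit-image and SciPy assumed; the radiograph is simulated and the detectors and thresholds are illustrative choices) applies two standard edge detectors, counts edge pixels, and performs Gaussian smoothing followed by Laplacian sharpening:

      # Hypothetical sketch: edge detection, Gaussian smoothing and Laplacian sharpening.
      import numpy as np
      from scipy import ndimage
      from skimage import feature, filters

      radiograph = np.random.default_rng(11).random((256, 256))   # placeholder grayscale image

      canny_edges = feature.canny(radiograph, sigma=2.0)           # edge detector 1
      sobel_edges = filters.sobel(radiograph) > 0.1                # edge detector 2 (thresholded)
      print("edge pixel counts:", int(canny_edges.sum()), int(sobel_edges.sum()))

      smoothed = ndimage.gaussian_filter(radiograph, sigma=1.5)    # smooth to highlight defects
      sharpened = smoothed - 0.5 * ndimage.laplace(smoothed)       # Laplacian-based sharpening
      print("sharpened range:", float(sharpened.min()), float(sharpened.max()))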

  4. Research on Feature Extraction of Remnant Particles of Aerospace Relays

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The existence of remnant particles, which significantly reduce the reliability of relays, is a serious problem for aerospace relays. The traditional method for detecting remnant particles, particle impact noise detection (PIND), can be used merely to detect the existence of particles; it is not able to provide any information about the particles' material. However, information on the material of the particles is very helpful for analyzing the causes of remnants. By analyzing the output acoustic signals from a PIND tester, this paper proposes three feature extraction methods: unit energy average pulse durative time, the shape parameter of the signal power spectral density (PSD), and the pulse linear predictive coding coefficient sequence. These methods allow identified remnants to be classified into four categories based on their material. Furthermore, we prove the validity of this new method by processing PIND signals from actual tests.

  5. Transmission line icing prediction based on DWT feature extraction

    Science.gov (United States)

    Ma, T. N.; Niu, D. X.; Huang, Y. L.

    2016-08-01

    Transmission line icing prediction is the premise of ensuring the safe operation of the network as well as the very important basis for the prevention of freezing disasters. In order to improve the prediction accuracy of icing, a transmission line icing prediction model based on discrete wavelet transform (DWT) feature extraction was built. In this method, a group of high and low frequency signals were obtained by DWT decomposition, and were fitted and predicted by using partial least squares regression model (PLS) and wavelet least square support vector model (w-LSSVM). Finally, the final result of the icing prediction was obtained by adding the predicted values of the high and low frequency signals. The results showed that the method is effective and feasible in the prediction of transmission line icing.

  6. New feature extraction in gene expression data for tumor classification

    Institute of Scientific and Technical Information of China (English)

    HE Renya; CHENG Qiansheng; WU Lianwen; YUAN Kehong

    2005-01-01

    Using gene expression data to discriminate tumor samples from normal ones is a powerful method. However, it is sometimes difficult because the gene expression data are high-dimensional and the number of samples in the datasets is very small. The key technique is to find a new gene expression representation that can provide understanding and insight into tumor-related cellular processes. In this paper, we propose a new feature extraction method based on the variance to the class center, and employ a support vector machine to classify the gene data as either normal or tumor. Two tumor datasets are used to demonstrate the effectiveness of our methods. The results show that the performance has been significantly improved.
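    One plausible reading of a variance-to-class-center score (a sketch only; the paper's exact formula may differ, and the data below are simulated) ranks each gene by its overall variance relative to its within-class variance around the class centers, then classifies with an SVM:

      # Hypothetical sketch: variance-to-class-center feature ranking followed by an SVM.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(12)
      X = rng.normal(size=(60, 2000))      # 60 samples, 2000 genes (placeholder)
      y = rng.integers(0, 2, size=60)      # normal vs. tumor labels (placeholder)

      def within_class_variance(X, y):
          """Per-gene variance measured around each class center."""
          var = np.zeros(X.shape[1])
          for c in np.unique(y):
              Xc = X[y == c]
              var += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
          return var / len(y)

      score = X.var(axis=0) / (within_class_variance(X, y) + 1e-12)   # high = class-separating
      top = np.argsort(score)[::-1][:50]
      print("CV accuracy:", cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean())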

  7. Online feature extraction for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, Elmaddin; Tambave, Ganesh; Kavatsyuk, Myroslav; Loehner, Herbert [KVI, University of Groningen (Netherlands); Collaboration: PANDA-Collaboration

    2011-07-01

    Resonances in the charmonium mass region will be studied in antiproton annihilations at FAIR with the multi-purpose PANDA spectrometer providing measurements of electromagnetic signals in a wide dynamic range. The Sampling ADC (SADC) readout of the Electromagnetic Calorimeter (EMC) will allow to realize online hit-detection on the single-channel level and to derive time and energy information. A digital filtering and feature-extraction algorithm was developed and implemented in VHDL code for the online application in a commercial SADC. We discuss the readout scheme, the program logic, the precise signal amplitude detection with phase correction at low sampling frequencies, and the usage of a double moving-window deconvolution filter for the pulse-shape restoration. Such double filtering allows to operate the EMC at much higher rates and to minimize the amount of pile-up events.

  8. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Full Text Available The electric power system is one of the most complex artificial systems in the world; its complexity is determined by its constitution, configuration, operation and organization, among other characteristics. Faults in the electric power system cannot be completely avoided. When the electric power system passes from its normal state to a faulty or abnormal one, its electric quantities (currents, voltages, phase angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in a principal component usually corresponds to the fault. Therefore, utilizing real-time measurements from phasor measurement units and principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in the electric power system, there still exist enormous problems that need close and intensive study.
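    A minimal sketch of the underlying idea (simulated synchrophasor window; scikit-learn assumed) applies PCA to a block of PMU measurements and reports the variable with the largest loading in the leading principal component:

      # Hypothetical sketch: PCA loadings used to point at the variable affected by a fault.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(13)
      names = [f"bus{i}_{q}" for i in range(1, 6) for q in ("V", "I", "angle")]
      window = rng.normal(size=(200, len(names)))   # 200 synchrophasor snapshots
      window[120:, 7] += 5.0                        # inject an abrupt change in one variable

      pca = PCA(n_components=3).fit(window)
      loadings = np.abs(pca.components_[0])
      print("suspected fault variable:", names[int(np.argmax(loadings))])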

  9. FEATURE EXTRACTION OF BONES AND SKIN BASED ON ULTRASONIC SCANNING

    Institute of Scientific and Technical Information of China (English)

    Zheng Shuxian; Zhao Wanhua; Lu Bingheng; Zhao Zhao

    2005-01-01

    In prosthetic socket design, CT scanning is the routine technique for obtaining cross-sectional images of the residual limb, but it is costly and exposes the patient to radiation. To address these drawbacks, a new ultrasonic scanning method is developed to acquire the bone and skin contours of the residual limb. Using a pig fore-leg as the scanning object, an overlapping algorithm is designed to reconstruct the 2D cross-sectional image, the contours of the bone and skin are extracted using an edge detection algorithm, and the 3D model of the pig fore-leg is reconstructed using reverse engineering technology. Checking the accuracy of the method by scanning a cylindrical workpiece shows that the extracted contours of the cylinder are quite close to the standard circumference, so it is feasible to obtain the contours of bones and skin by ultrasonic scanning. The ultrasonic scanning system, featuring no radiation and low cost, is a new means of cross-sectional scanning for medical imaging.

  10. Fish Recognition Based on Robust Features Extraction from Size and Shape Measurements Using Neural Network

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2010-01-01

    Full Text Available Problem statement: Image recognition is a challenging problem that researchers have studied for a long time, especially in recent years, because of distortion, noise, segmentation errors, and the overlap and occlusion of objects in digital images. Many fields are concerned with pattern recognition, for example fingerprint verification, face recognition, iris discrimination, chromosome shape discrimination, optical character recognition, texture discrimination, and speech recognition. A system for recognizing an isolated pattern of interest may be an approach for dealing with such applications. Scientists and engineers with interests in image processing and pattern recognition have developed various approaches to deal with digital image recognition problems, such as neural networks, contour matching, and statistics. Approach: In this study, our aim was to recognize an isolated pattern of interest in the image based on a combination of robust extracted features that depend on size and shape measurements obtained through distance and geometrical measurements. Results: We presented a system prototype for dealing with this problem. The system starts by acquiring an image containing a fish pattern, and then image feature extraction is performed relying on size and shape measurements. Our system has been applied to 20 different fish families, each family with a different number of fish types, and our sample consists of 350 distinct fish images. These images were divided into two datasets: 257 training images and 93 testing images. An overall accuracy of 86% was obtained on the test dataset using a neural network with the back-propagation algorithm. Conclusion: We developed a classifier for fish image recognition. We efficiently chose a feature extraction method to fit our demands. Our classifier successfully design and implement a

  11. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

    This paper proposes a novel adaptive algorithm, based on a cumulative histogram approach with varying threshold values, to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces. At first, the method adopts the Viola-Jones face detector to locate the face and crop the face region in an image. Based on the structure of the human face, six relevant regions are cropped from the face image: right eyebrow, left eyebrow, right eye, left eye, nose, and mouth. Then the histogram of each cropped region is computed, and its cumulative histogram value is used with varying threshold values to create a new filtered image in an adaptive way. The connected component of the area of interest in each relevant filtered image indicates the respective feature region. A simple linear search algorithm for eyebrows, eyes and mouth filtering images and contour algorithm for nos...
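
    The Python sketch below shows the cumulative-histogram idea on one cropped region: an intensity cut-off is chosen so that a given fraction of the darkest pixels survives, the fraction is varied, and a connected component is kept as the candidate feature region. The keep-fractions and the placeholder crop are assumptions rather than the paper's settings.

      import numpy as np
      from scipy import ndimage

      def cumulative_threshold(region, keep_fraction=0.05):
          """Return a binary image keeping roughly the darkest `keep_fraction` of pixels."""
          hist, edges = np.histogram(region, bins=256, range=(0, 255))
          cumulative = np.cumsum(hist) / region.size
          cutoff = edges[np.searchsorted(cumulative, keep_fraction)]
          return region <= cutoff

      def largest_component(mask):
          """Keep only the largest connected component of a binary mask."""
          labels, n = ndimage.label(mask)
          if n == 0:
              return mask
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          return labels == (int(np.argmax(sizes)) + 1)

      eye_region = np.random.default_rng(0).integers(0, 256, (40, 80)).astype(np.uint8)  # placeholder crop
      for fraction in (0.02, 0.05, 0.10):           # vary the threshold as in the abstract
          candidate = largest_component(cumulative_threshold(eye_region, fraction))
          print(fraction, int(candidate.sum()), "pixels in the candidate feature region")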

  12. Pomegranate peel and peel extracts: chemistry and food features.

    Science.gov (United States)

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  13. Filter and Filter Bank Design for Image Texture Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Randen, Trygve

    1997-12-31

    The relevance of this thesis to energy and environment lies in its application to remote sensing, for instance sea floor mapping and seismic pattern recognition. The focus is on the design of two-dimensional filters for feature extraction, segmentation, and classification of digital images with textural content. The features are extracted by filtering with a linear filter and estimating the local energy in the filter response. The thesis gives a review covering broadly most previous approaches to texture feature extraction and continues with proposals of some new techniques. 143 refs., 59 figs., 7 tabs.
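
    A minimal Python sketch of the filter-then-local-energy pipeline described here follows; it uses a small Gabor filter bank as the linear filters (an assumption, since the thesis covers many filter designs) and Gaussian smoothing of the squared response as the local energy estimate.

      import numpy as np
      from scipy import ndimage
      from skimage.filters import gabor

      def local_energy_features(image, frequencies=(0.1, 0.25), thetas=(0, np.pi / 4, np.pi / 2)):
          """Filter with a Gabor bank, square the responses, and smooth to get local energy maps."""
          maps = []
          for f in frequencies:
              for theta in thetas:
                  real, imag = gabor(image, frequency=f, theta=theta)
                  energy = ndimage.gaussian_filter(real ** 2 + imag ** 2, sigma=4)
                  maps.append(energy)
          return np.stack(maps, axis=-1)   # one feature vector per pixel

      texture = np.random.default_rng(0).random((64, 64))     # placeholder texture patch
      features = local_energy_features(texture)
      print(features.shape)                                    # (64, 64, 6)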

  14. Melt extraction from crystal mushes: Numerical model of texture evolution and calibration of crystallinity-ordering relationships

    Science.gov (United States)

    Špillar, Václav; Dolejš, David

    2015-12-01

    Mechanical crystal-melt interactions in magmatic systems by separation or accumulation of crystals or by extraction of interstitial melt are expected to modify the spatial distribution of crystals observed as phenocrysts in igneous rocks. Textural analysis of porphyritic products can thus provide a quantitative means of interpreting the magnitude of crystal accumulation or melt loss and reconstructing the initial crystal percentage at which the process occurred. We present a new three-dimensional numerical model that evaluates the effects of crystal accumulation (or interstitial melt removal) on the spatial distribution of crystals. Both processes lead to increasing apparent crystallinity but also to increasing spatial ordering expressed by the clustering index (R). The trend of progressive crystal packing deviates from a random texture trend, produced by static crystal nucleation and growth, and it is universal for any texture with a straight log-linear crystal size distribution. For sparse crystal suspensions (5 vol.% crystals, R = 1.03), up to 97% of the melt can be extracted, corresponding to a new crystallinity of 65 vol.% and R = 1.32, when the rheological threshold of crystal interlocking is reached. For initially crystal-rich suspensions, the compaction path is shorter because the initial crystal population is more aggregated and reaches the limit of interlocking sooner. Crystal suspensions with ~ 35 vol.% crystals cannot be compacted without mechanical failure. These results illustrate that the onset of the rheological threshold of magma immobility strongly depends on the spatial configuration of crystals in the mush: the primary rigid percolation threshold (~ 35 vol.% crystals) corresponds to a touching or interlocking crystal framework produced by in situ closed-system crystallization, whereas the secondary rigid percolation threshold (~ 35 to ~ 75 vol.% crystals) can be reached by compaction, which is particularly spatially efficient when acting on

  15. FUSION OF WAVELET AND CURVELET COEFFICIENTS FOR GRAY TEXTURE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    M. Santhanalakshmi

    2014-05-01

    Full Text Available This study presents a framework for gray texture classification based on the fusion of wavelet and curvelet features. The two main frequency-domain transformations, the Discrete Wavelet Transform (DWT) and the Discrete Curvelet Transform (DCT), are analyzed. The features are extracted from the DWT- and DCT-decomposed images separately and their performance is evaluated independently. Then a feature fusion technique is applied to increase the classification accuracy of the proposed approach. Brodatz texture images are used for this study. The results show that only two texture images, D105 and D106, are misclassified by the fusion approach, and a classification accuracy of 99.74% is obtained.

  16. Texture-Analysis-Incorporated Wind Parameters Extraction from Rain-Contaminated X-Band Nautical Radar Images

    Directory of Open Access Journals (Sweden)

    Weimin Huang

    2017-02-01

    Full Text Available In this paper, a method for extracting wind parameters from rain-contaminated X-band nautical radar images is presented. The texture of the radar image is first generated based on spatial variability analysis. Through this process, the rain clutter in an image can be removed while the wave echoes are retained. The number of rain-contaminated pixels in each azimuthal direction of the texture is estimated, and this is used to determine the azimuthal directions in which the rain-contamination is negligible. Then, the original image data in these directions are selected for wind direction and speed retrieval using the modified intensity-level-selection-based wind algorithm. The proposed method is applied to shipborne radar data collected from the east Coast of Canada. The comparison of the radar results with anemometer data shows that the standard deviations of wind direction and speed using the rain mitigation technique can be reduced by about 14.5° and 1.3 m/s, respectively.

  17. Texture feature extraction for the lung lesion density classification on computed tomography scan image

    Directory of Open Access Journals (Sweden)

    Hasnely

    2016-05-01

    Full Text Available Radiological examination by computed tomography (CT) scan is used for early detection of lung cancer in order to minimize the mortality rate. However, assessment and diagnosis by an expert are subjective, depending on the competence and experience of the radiologist. Hence, digital image processing of CT scans is needed as a tool to help diagnose lung cancer. This research proposes a morphological-characteristics method for assessing lung cancer lesion density using the histogram and the GLCM (Gray Level Co-occurrence Matrix). The most well-known artificial neural network (ANN) architecture, the multilayer perceptron (MLP), is used to classify lung cancer lesion density as heterogeneous or homogeneous. Fifty CT scan images of lungs obtained from the Department of Radiology of RSUP Dr. Sardjito Hospital, Yogyakarta, are used as the database. The results show that the proposed method achieved an accuracy of 98%, sensitivity of 96%, and specificity of 96%.
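
    The Python sketch below shows the general shape of such a pipeline: first-order histogram statistics plus GLCM descriptors for a lesion region of interest, fed to an MLP. Function names follow recent scikit-image (older releases spell them greycomatrix/greycoprops); the ROIs, labels, and chosen GLCM properties are placeholders, not the paper's exact configuration.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.neural_network import MLPClassifier

      def lesion_features(roi):
          """roi: 2-D uint8 array cropped around the lesion."""
          hist_feats = [roi.mean(), roi.std(), float(np.median(roi))]
          glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          glcm_feats = [float(graycoprops(glcm, prop).mean())
                        for prop in ("contrast", "homogeneity", "energy", "correlation")]
          return np.array(hist_feats + glcm_feats)

      rng = np.random.default_rng(0)
      rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(50)]  # placeholder ROIs
      labels = rng.integers(0, 2, 50)                # 0 = homogeneous, 1 = heterogeneous
      X = np.array([lesion_features(r) for r in rois])
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
      print(clf.score(X, labels))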

  18. Alkali-aided protein extraction from chicken dark meat: textural properties and color characteristics of recovered proteins.

    Science.gov (United States)

    Omana, D A; Moayedi, V; Xu, Y; Betti, M

    2010-05-01

    Textural properties, water-holding capacity, and color characteristics of alkali-extracted chicken dark meat have been studied. Alkali extraction was carried out at 4 different pH values (10.5, 11.0, 11.5, and 12.0). Cooking loss and water loss were found to decrease significantly (P < 0.05) at higher extraction pH values. Protein samples extracted at higher pH values were found to be harder, and the maximum hardness (4,956 g of force) was shown by samples prepared at pH 11.5. Chewiness values were significantly increased (P < 0.05) for protein samples extracted at pH values of 11.5 and 12.0. The dynamic viscoelastic behavior of the samples was assessed in the temperature range of 7 to 100 degrees C. The dynamic viscoelastic behavior of raw chicken dark meat, as revealed by the storage modulus, indicated considerable gel-forming ability. The maximum storage modulus (G') value of 439 kPa was measured at 66.7 degrees C. The storage modulus was found to decrease for the recovered protein samples and was lowest at the higher pH values. However, the recovered protein samples did show substantial gel-forming ability when stored with cryoprotectants. Tan delta values denoted 2 clear transitions for raw dark meat; however, only 1 major transition, at 50.1 degrees C, was evident for pH-treated samples, probably reflecting the loss of collagen in processing. In conclusion, this process of protein recovery may offer the possibility of using underused poultry resources for the preparation of functional foods.

  19. Texture mapping for features based on point constraints

    Institute of Scientific and Technical Information of China (English)

    王法强; 耿国华; 李康; 贺毅岳

    2012-01-01

    Feature-based texture mapping is a special realistic-rendering technique in computer-aided craniofacial reconstruction. To accurately realize texture mapping of the local organs of a 3D human face, this paper introduces a feature texture mapping method based on point constraints, which are enforced by fixing vertices during least squares conformal map parameterization. Mapping actual textures from single frontal photographs confirmed that the method achieves good mapping results. The experimental results show that the method is robust and efficient and reduces the complexity of the algorithm.

  20. Identifying metastatic breast tumors using textural kinetic features of a contrast based habitat in DCE-MRI

    Science.gov (United States)

    Chaudhury, Baishali; Zhou, Mu; Goldgof, Dmitry B.; Hall, Lawrence O.; Gatenby, Robert A.; Gillies, Robert J.; Drukteinis, Jennifer S.

    2015-03-01

    The ability to identify aggressive tumors from indolent tumors using quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) would dramatically change the breast cancer treatment paradigm. With this prognostic information, patients with aggressive tumors that have the ability to spread to distant sites outside of the breast could be selected for more aggressive treatment and surveillance regimens. Conversely, patients with tumors that do not have the propensity to metastasize could be treated less aggressively, avoiding some of the morbidity associated with surgery, radiation and chemotherapy. We propose a computer aided detection framework to determine which breast cancers will metastasize to the loco-regional lymph nodes as well as which tumors will eventually go on to develop distant metastases, using quantitative image analysis and radiomics. We defined a new contrast-based tumor habitat and analyzed textural kinetic features from this habitat for classification purposes. The proposed tumor habitat, which we call the combined-habitat, is derived from the intersection of two individual tumor sub-regions: one that exhibits rapid initial contrast uptake and the other that exhibits rapid delayed contrast washout. Hence the combined-habitat represents the tumor sub-region within which the pixels undergo both rapid initial uptake and rapid delayed washout. We analyzed a dataset of twenty-seven representative two-dimensional (2D) images from volumetric DCE-MRI of breast tumors for classification of tumors with no involved lymph nodes from tumors with a positive number of axillary lymph nodes. For this classification an accuracy of 88.9% was achieved. Twenty of the twenty-seven patients were analyzed for classification of distant metastatic tumors from indolent cancers (tumors with no involved lymph nodes), for which the accuracy was 84.3%.
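
    The combined-habitat construction amounts to intersecting two binary masks over the tumor segmentation. The Python sketch below illustrates this; the percentile thresholds used to define "rapid" uptake and washout, and the synthetic image time points, are assumptions rather than the paper's values.

      import numpy as np

      rng = np.random.default_rng(0)
      pre, early, late = (rng.random((64, 64)) for _ in range(3))  # placeholder DCE-MRI time points
      tumor_mask = np.ones((64, 64), dtype=bool)                   # placeholder tumour segmentation

      initial_uptake = early - pre            # signal rise between pre- and early post-contrast
      delayed_washout = early - late          # signal drop between early and delayed phases

      rapid_uptake = initial_uptake >= np.percentile(initial_uptake[tumor_mask], 75)
      rapid_washout = delayed_washout >= np.percentile(delayed_washout[tumor_mask], 75)

      combined_habitat = tumor_mask & rapid_uptake & rapid_washout
      print("combined-habitat pixels:", int(combined_habitat.sum()))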

  1. Strategy of experimental design for intensification of solvent extraction of natural antioxidant flavonoids and phenols from buckthorn textured leaves

    Directory of Open Access Journals (Sweden)

    Baya Berka

    2015-12-01

    Full Text Available Prior to solvent extraction of plant-based active molecules, adequate texturing by Détente Instantanée Contrôlée (DIC; French for "instant controlled pressure drop") helps overcome the slow diffusion of the solvent/solute through the solid matrix. This work aimed at determining the impact of DIC pretreatments on buckthorn (Rhamnus alaternus L.) morphology. The DIC operating parameters selected were the saturated steam pressure, the thermal treatment time, and the number of cycles. A three-parameter, five-level response surface method was used to optimize the DIC processing parameters. The response factors were the overall and individual yields of flavonol aglycone extraction and the antioxidant activity of both the expanded dried material (swell-dried leaves) and the extracts. The yield of flavonol aglycones was 18.23 mg Kaemp eq/g dry basis (mg Kaemp eq/g db) in 3 min for DIC-treated buckthorn, against 12.24 mg Kaemp eq/g db in 150 min for untreated natural buckthorn raw material. Furthermore, the antioxidant activity of the DIC-treated material was markedly higher, and its DPPH radical-reducing power was 68 times that of the untreated plant material.

  2. Feature extraction and models for speech: An overview

    Science.gov (United States)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression at very low bit rates with high speech quality for the Internet and cell phones.

  3. Appearance and characterization of fruit image textures for quality sorting using wavelet transform and genetic algorithms.

    Science.gov (United States)

    Khoje, Suchitra

    2017-07-24

    Images of four quality grades of mangoes and guavas are evaluated for color and textural features to characterize and classify them, and to model fruit appearance grading. The paper discusses three approaches to identifying the most discriminating texture features of both fruits. In the first approach, the fruit's color and texture features are selected using the Mahalanobis distance. A total of 20 color features and 40 textural features are extracted for analysis. Using Mahalanobis distance and feature intercorrelation analyses, one best color feature (mean of a* [L*a*b* color space]) and two textural features (energy of a*, contrast of H*) are selected for guava, while two best color features (R std, H std) and one textural feature (energy of b*) are selected for mango, with the highest discriminative power. The second approach studies some common wavelet families in search of the best classification model for fruit quality grading. Wavelet features extracted from five basic mother wavelets (db, bior, rbior, Coif, Sym) are explored to characterize fruit texture appearance. In the third approach, a genetic algorithm is used to select, from a large universe of features, only those color and wavelet texture features that are relevant to class separation. The study shows that image color and texture features identified using a genetic algorithm can distinguish between the various quality classes of fruits. The experimental results showed that a support vector machine classifier is selected for guava grading with an accuracy of 97.61%, and an artificial neural network is selected for mango grading with an accuracy of 95.65%. The proposed method is a nondestructive fruit quality assessment method. The experimental results have shown that the genetic algorithm together with wavelet texture features has the potential to discriminate fruit quality. Finally, it can be concluded that the discussed method is an accurate, reliable, and objective tool to determine fruit

  4. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has depended on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization with the selection of a global, local, or mean threshold. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique is combined with feature extraction using the discrete sine transform (DST) for better classification results through multi-technique fusion. The novel methodology is compared to the traditional techniques used for feature extraction in content-based image classification. Three benchmark datasets, namely the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation. The performance measures evaluated clearly reveal the superiority of the proposed fusion technique with ordered mean values and the discrete sine transform over the popular single-view feature extraction methodologies for classification.
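
    The abstract does not spell out the construction of the ordered mean values, so the Python sketch below uses one plausible reading, sorted block means concatenated with low-order DST coefficients, purely to illustrate the fusion idea; the block count, the number of retained coefficients, and the image are assumptions.

      import numpy as np
      from scipy.fft import dstn

      def ordered_mean_values(image, blocks=4):
          """Split the image into blocks x blocks tiles and return their means in sorted order."""
          h, w = image.shape
          means = [image[i * h // blocks:(i + 1) * h // blocks,
                         j * w // blocks:(j + 1) * w // blocks].mean()
                   for i in range(blocks) for j in range(blocks)]
          return np.sort(means)

      def dst_features(image, keep=8):
          """Low-order coefficients of a 2-D discrete sine transform."""
          return dstn(image, type=2)[:keep, :keep].ravel()

      image = np.random.default_rng(0).random((64, 64))      # placeholder image
      fused = np.concatenate([ordered_mean_values(image), dst_features(image)])
      print(fused.shape)                                      # fused feature vector of length 16 + 64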

  5. A NEW TEXTURE IMAGE RETRIEVAL WAY

    Institute of Scientific and Technical Information of China (English)

    Wang Zuyuan; Luo Lin; Zhuang Zhenquan

    2001-01-01

    This paper proposes a new texture image retrieval method that exploits the population search and random information exchange merits of evolutionary programming, which can be used to optimize image feature vector extraction. The experimental results show that this approach can efficiently improve retrieval accuracy and achieve fast retrieval thanks to the advantages of the evolutionary programming algorithm.

  6. A feature extraction technique based on character geometry for character recognition

    CERN Document Server

    Gaurav, Dinesh Dileep

    2012-01-01

    This paper describes a geometry-based technique for feature extraction applicable to segmentation-based word recognition systems. The proposed system extracts the geometric features of the character contour. These features are based on the basic line types that form the character skeleton. The system gives a feature vector as its output. The feature vectors generated from a training set were then used to train a pattern recognition engine based on neural networks so that the system could be benchmarked.

  7. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    Science.gov (United States)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  8. CLASSIFICATION OF INFORMAL SETTLEMENTS THROUGH THE INTEGRATION OF 2D AND 3D FEATURES EXTRACTED FROM UAV DATA

    Directory of Open Access Journals (Sweden)

    C. M. Gevaert

    2016-06-01

    Full Text Available Unmanned Aerial Vehicles (UAVs are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  9. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically ex

  10. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  11. Fish Classification Based on Robust Features Extraction From Color Signature Using Back-Propagation Classifier

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2011-01-01

    Full Text Available Problem statement: Image recognition is a challenging problem that researchers have studied for a long time, especially in recent years, because of distortion, noise, segmentation errors, and the overlap and occlusion of objects in digital images. Many fields are concerned with pattern recognition, for example fingerprint verification, face recognition, iris discrimination, chromosome shape discrimination, optical character recognition, texture discrimination, and speech recognition. A system for recognizing an isolated pattern of interest may be an approach for dealing with such applications. Scientists and engineers with interests in image processing and pattern recognition have developed various approaches to deal with digital image recognition problems, such as neural networks, contour matching, and statistics. Approach: In this study, our aim was to recognize an isolated pattern of interest (fish) in the image based on robust feature extraction, which depends on color signatures extracted using the RGB color space, the color histogram, and the gray level co-occurrence matrix. Results: We presented a system prototype for dealing with this problem. The system starts by acquiring an image containing a fish pattern, and then image segmentation is performed relying on the color signature. Our system has been applied to 20 different fish families, each family with a different number of fish types, and our sample consists of 610 distinct fish images. These images are divided into two datasets: 400 training images and 210 testing images. An overall accuracy of 84% was obtained on the test dataset using the back-propagation classifier. Conclusion: We developed a classifier for fish image recognition. We efficiently chose an image segmentation method to fit our demands. Our classifier successfully design and implement a decision which performed efficiently without any

  12. {sup 18}F-FDG PET/CT heterogeneity quantification through textural features in the era of harmonisation programs: a focus on lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lasnon, Charline [University Hospital, Nuclear Medicine Department, Caen (France); Biologie et Therapies Innovantes des Cancers Localement Agressifs, Universite de Caen Normandie, INSERM, Caen (France); Normandie University, Caen (France); Majdoub, Mohamed; Lavigne, Brice; Visvikis, Dimitris [LaTIM, INSERM UMR 1101, Brest (France); Do, Pascal [Thoracic Oncology, Francois Baclesse Cancer Centre, Caen (France); Madelaine, Jeannick [Caen University Hospital, Pulmonology Department, Caen (France); Hatt, Mathieu [LaTIM, INSERM UMR 1101, Brest (France); CHRU Morvan, INSERM UMR 1101, Laboratoire de Traitement de l' Information Medicale (LaTIM), Groupe ' Imagerie multi-modalite quantitative pour le diagnostic et la therapie' , Brest (France); Aide, Nicolas [University Hospital, Nuclear Medicine Department, Caen (France); Biologie et Therapies Innovantes des Cancers Localement Agressifs, Universite de Caen Normandie, INSERM, Caen (France); Normandie University, Caen (France); Caen University Hospital, Nuclear Medicine Department, Caen (France)

    2016-12-15

    Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to be dependent on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected {sup 18}F-FDG heterogeneity metrics. To carry out our study, we prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF{sub 7}) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered subset expectation maximisation (OSEM) images. Delineation was performed with fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on PSF{sub 7} and OSEM ones, and with a 50 % standardised uptake values (SUV){sub max} threshold (SUV{sub max50%}) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CH{sub AUC})], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUV{sub max50%} were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images compared to OSEM and PSF{sub 7} images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as heterogeneity for CH{sub AUC}, dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we

  13. The role of the complex textural microstructure co-occurrence matrices, based on Laws’ features, in the characterization and recognition of some pathological structures, from ultrasound images

    Directory of Open Access Journals (Sweden)

    Delia Alexandrina Mitrea

    2016-03-01

    Full Text Available Non-invasive diagnosis based on ultrasound images is a challenge in current research. We develop computerized, texture-based methods for automatic and computer-assisted diagnosis using the information obtained from ultrasound images. In this work, we defined the co-occurrence matrix of complex textural microstructures determined using Laws' convolution filters, and we evaluated it for the characterization and recognition of some important anatomical and pathological structures within ultrasound images. These structures were colorectal tumors and the gingival sulcus, the properties of the latter being important for the diagnosis and monitoring of periodontal disease. We determined the textural model of these structures using the classical and the newly defined textural features. For the automatic recognition, we used powerful classifiers such as the Multilayer Perceptron, Support Vector Machines, decision-tree-based classifiers such as Random Forest and C4.5, and AdaBoost in combination with the C4.5 algorithm.

  14. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods. It also proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As a hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  15. Sustainable rehabilitation of mining waste and acid mine drainage using geochemistry, mine type, mineralogy, texture, ore extraction and climate knowledge.

    Science.gov (United States)

    Anawar, Hossain Md

    2015-08-01

    The oxidative dissolution of sulfidic minerals releases extremely acidic leachate, sulfate, and potentially toxic elements, e.g., As, Ag, Cd, Cr, Cu, Hg, Ni, Pb, Sb, Th, U, Zn, etc., from different mine tailings and waste dumps. For the sustainable rehabilitation and disposal of mining waste, the sources and mechanisms of contaminant generation and the fate and transport of contaminants should be clearly understood. Therefore, this study provides a critical review of (1) recent insights into the mechanisms of oxidation of sulfidic minerals, (2) environmental contamination by mining waste, and (3) remediation and rehabilitation techniques, and (4) then develops the GEMTEC conceptual model/guide [(bio)geochemistry, mine type, mineralogy, geological texture, ore extraction process, climatic knowledge] to provide a new scientific approach and knowledge for the remediation of mining wastes and acid mine drainage. This study suggests pre-mining geological, geochemical, mineralogical and microtextural characterization of different mineral deposits, and post-mining studies of ore extraction processes, physical, geochemical, mineralogical and microbial reactions, natural attenuation and the effect of climate change for the sustainable rehabilitation of mining waste. All components of this model should be considered for effective and integrated management of mining waste and acid mine drainage.

  16. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    Science.gov (United States)

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOB). The maximum noise fraction (MNF) method is adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs. Then MNF is used to extract the characteristic features of the SOBs. The extracted features are combined into the feature vector for classification. Thus the strong band correlation is avoided and spectral redundancy is reduced. The LS-SVM classifier is adopted, which replaces the inequality constraints in the SVM with equality constraints, so the computation cost is reduced and the learning performance is improved. The proposed method optimizes spectral information by feature extraction and reduces spectral noise. The classifier performance is improved. Experimental results show the superiority of the proposed algorithm.
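
    The Python sketch below illustrates the subspace-of-bands idea only in outline: the bands are split into groups (here simply contiguous groups rather than correlation-based ones), each group is reduced separately (PCA stands in for MNF, which scikit-learn does not provide), the reduced features are concatenated, and a standard SVC stands in for the LS-SVM. Data and group counts are synthetic assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 120))                 # 300 pixels x 120 spectral bands
      y = rng.integers(0, 3, size=300)                # 3 land-cover classes (toy labels)
      X[y == 1, 40:80] += 0.8                         # make one band group informative

      def subspace_features(X, n_groups=4, n_components=3):
          """Reduce each band group separately and concatenate the reduced features."""
          groups = np.array_split(np.arange(X.shape[1]), n_groups)
          parts = [PCA(n_components=n_components).fit_transform(X[:, g]) for g in groups]
          return np.hstack(parts)

      features = subspace_features(X)
      print(cross_val_score(SVC(kernel="rbf"), features, y, cv=5).mean())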

  17. Feature Extraction and Classification of Echo Signal of Ground Penetrating Radar

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hui-lin; TIAN Mao; CHEN Xiao-li

    2005-01-01

    An automatic feature extraction and classification algorithm for the echo signal of ground penetrating radar is presented. The dyadic wavelet transform and the average energy of the wavelet coefficients are applied to decompose the echo signal and extract its features. Then the extracted feature vector is fed to a feed-forward multi-layer perceptron classifier. Experimental results based on measured GPR echo signals obtained from the Mei-shan railway are presented.
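
    A minimal Python sketch of this feature pipeline follows: each echo trace is decomposed with a dyadic wavelet transform and the average energy of the coefficients at each level forms the feature vector for an MLP. The wavelet choice, decomposition level, and synthetic traces are assumptions rather than the paper's settings.

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPClassifier

      def wavelet_energy_features(trace, wavelet="db4", level=5):
          """Average energy of the wavelet coefficients at each decomposition level."""
          coeffs = pywt.wavedec(trace, wavelet, level=level)
          return np.array([np.mean(c ** 2) for c in coeffs])

      rng = np.random.default_rng(0)
      traces = rng.normal(size=(100, 512))          # 100 synthetic echo traces
      labels = rng.integers(0, 2, size=100)         # e.g. 0 = no target, 1 = buried target
      traces[labels == 1] += 0.5 * np.sin(np.linspace(0, 50, 512))   # add a reflection-like pattern

      X = np.array([wavelet_energy_features(t) for t in traces])
      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, labels)
      print(clf.score(X, labels))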

  18. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    Science.gov (United States)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    The recent developments in 3D scanning technologies have not been accompanied by comparable developments in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem, decay processes such as erosion, which never cease their action and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphical inscriptions are among the most affected by these processes because, due to their very nature, they are carved on the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of the ongoing work dedicated to exploring the information contained in 3D scanning data sets in new ways. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It makes it possible to bring out human-made patterns still present on the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, which we refer to as residuals, contained in the wider forms of the backdrop. Afterwards, we have simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM

  19. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    Science.gov (United States)

    2015-11-04

    Published in River Research and Applications (2013), Wiley Online Library, DOI: 10.1002/rra.2701. The remainder of this record consists of report-form boilerplate and stray reference fragments and could not be recovered.

  20. Apriori and N-gram Based Chinese Text Feature Extraction Method

    Institute of Scientific and Technical Information of China (English)

    王晔; 黄上腾

    2004-01-01

    Feature extraction, which means extracting the representative words from a text, is an important issue in the text mining field. This paper presents a new Apriori- and N-gram-based Chinese text feature extraction method and analyzes its correctness and performance. Our method solves the problem that existing extraction methods cannot find frequent words of arbitrary length in Chinese texts. The experimental results show that this method is feasible.
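
    The Python sketch below shows the core idea of combining Apriori-style pruning with character n-grams so that frequent words of arbitrary length can be found: candidate n-grams are only kept if both of their (n-1)-gram parts are already frequent. The sample text and support threshold are illustrative, not the paper's data or parameters.

      from collections import Counter

      def frequent_ngrams(text, min_support=3, max_len=6):
          """Return character n-grams appearing at least `min_support` times, grown Apriori-style."""
          frequent = {}
          current = {g for g, c in Counter(text).items() if c >= min_support}   # frequent 1-grams
          for n in range(2, max_len + 1):
              if not current:
                  break
              counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
              # Apriori pruning: an n-gram can only be frequent if both of its (n-1)-gram parts are.
              current = {g for g, c in counts.items()
                         if c >= min_support and g[:-1] in current and g[1:] in current}
              frequent.update({g: counts[g] for g in current})
          return frequent

      sample = "文本挖掘依赖特征抽取，特征抽取是文本挖掘的基础。" * 4
      print(frequent_ngrams(sample, min_support=4))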

  1. Transition Texture Synthesis

    Institute of Scientific and Technical Information of China (English)

    Yueh-Yi Lai; Wen-Kai Tai

    2008-01-01

    Synthesis of transition textures is essential for displaying visually acceptable appearances on a terrain. This investigation presents a modified method for synthesizing transition textures to be tiled on a terrain. All transition pattern types are recognized for a number of input textures. The proposed modified patch-based sampling texture synthesis approach, which uses an extra feature map of the input source and target textures for patch matching, can synthesize any transition texture in a succession pattern by initializing the output texture with a portion of the source texture enclosed in a transition cut. The transition boundary is further enhanced to improve the visual effect by tracing out the integral texture elements. Either the Game of Life model or the Wang tiles method is exploited to present a good-looking profile of successions on a terrain for tiling transition textures. Experimental results indicate that the proposed method requires few input textures, yet synthesizes numerous tileable transition textures, which are useful for obtaining a vivid appearance of a terrain.

  2. Spectrum based feature extraction using spectrum intensity ratio for SSVEP detection.

    Science.gov (United States)

    Itai, Akitoshi; Funase, Arao

    2012-01-01

    In recent years, the steady-state visual evoked potential (SSVEP) has been used as a basis for brain-computer interfaces (BCI) [1]. Various feature extraction and classification techniques have been proposed to achieve SSVEP-based BCI. Feature extraction for SSVEP is usually developed in the frequency domain, regardless of the limitation on the flickering frequency of the visual stimulus imposed by the hardware architecture. Here we introduce feature extraction using a spectrum intensity ratio. Results show that the detection rate reaches 84% when the spectrum intensity ratio is used with unsupervised classification. They also indicate that SSVEP detection is enhanced by the proposed feature extraction when the second harmonic is included.
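
    One way to compute such a ratio is shown in the Python sketch below: spectral power at the stimulus frequency (plus its second harmonic) divided by the power of neighbouring frequency bins. The band widths, the synthetic EEG signal, and the exact ratio definition are assumptions and may differ from the paper's formulation.

      import numpy as np

      def spectrum_intensity_ratio(signal, fs, stim_freq, half_band=1.0):
          """Ratio of power near the stimulus frequency (and 2nd harmonic) to neighbouring power."""
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          power = np.abs(np.fft.rfft(signal)) ** 2

          def band_power(f0):
              return power[(freqs >= f0 - half_band) & (freqs <= f0 + half_band)].sum()

          target = band_power(stim_freq) + band_power(2 * stim_freq)   # fundamental + 2nd harmonic
          neighbour = power[(freqs >= stim_freq - 6) & (freqs <= 2 * stim_freq + 6)].sum() - target
          return target / (neighbour + 1e-12)

      fs, stim = 256, 12.0                      # sampling rate (Hz) and flicker frequency (Hz)
      t = np.arange(0, 4, 1 / fs)
      eeg = 0.8 * np.sin(2 * np.pi * stim * t) + np.random.default_rng(0).normal(0, 1, t.size)
      print(spectrum_intensity_ratio(eeg, fs, stim))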

  3. PyEEG: an open source Python module for EEG/MEG feature extraction.

    Science.gov (United States)

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  4. Feature evaluation and extraction based on neural network in analog circuit fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yuan Haiying; Chen Guangju; Xie Yongle

    2007-01-01

    Choosing the right characteristic parameters is the key to fault diagnosis in analog circuits. Feature evaluation and extraction methods based on neural networks are presented. Parameter evaluation of circuit features is realized from neural network training results; the network's superior nonlinear mapping capability is well suited to extracting fault features, which are subsequently normalized and compressed. The complex classification problem of fault pattern recognition in analog circuits is effectively transferred to the feature processing stage through neural-network-based feature extraction, which improves diagnosis efficiency. A fault diagnosis example validates this method.

  5. A fingerprint feature extraction algorithm based on curvature of Bezier curve

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Fingerprint feature extraction is a key step in fingerprint identification. A novel feature extraction algorithm is proposed in this paper, which describes fingerprint features using the bending information of fingerprint ridges. In the algorithm, ridges in a specific region of the fingerprint image are first traced, and these ridges are then fitted with Bezier curves. Finally, the point with the maximal curvature on the Bezier curve is defined as a feature point. Experimental results demonstrate that this kind of feature point characterizes the bending trend of fingerprint ridges effectively and is robust to noise; in addition, the extraction precision of this algorithm is better than that of conventional approaches.
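
    The Python sketch below illustrates the fit-and-find-maximum-curvature step on a synthetic ridge: a cubic Bezier curve is fitted by least squares and the point of maximal curvature is reported as the feature point. The ridge points, parameterization, and cubic degree are assumptions; the paper's tracing and fitting details may differ.

      import numpy as np

      def fit_cubic_bezier(points):
          """Least-squares control points of a cubic Bezier through ordered ridge points."""
          t = np.linspace(0, 1, len(points))
          B = np.stack([(1 - t) ** 3, 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2, t ** 3], axis=1)
          ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
          return ctrl                                            # shape (4, 2)

      def max_curvature_point(ctrl, samples=200):
          """Return the point of maximal curvature on the fitted Bezier curve."""
          t = np.linspace(0, 1, samples)
          p0, p1, p2, p3 = ctrl
          d1 = 3 * ((1 - t)[:, None] ** 2 * (p1 - p0) + 2 * ((1 - t) * t)[:, None] * (p2 - p1)
                    + (t ** 2)[:, None] * (p3 - p2))
          d2 = 6 * ((1 - t)[:, None] * (p2 - 2 * p1 + p0) + t[:, None] * (p3 - 2 * p2 + p1))
          curvature = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / (np.linalg.norm(d1, axis=1) ** 3 + 1e-12)
          k = int(np.argmax(curvature))
          point = ((1 - t[k]) ** 3 * p0 + 3 * (1 - t[k]) ** 2 * t[k] * p1
                   + 3 * (1 - t[k]) * t[k] ** 2 * p2 + t[k] ** 3 * p3)
          return point, curvature[k]

      ridge = np.column_stack([np.linspace(0, 10, 50), 3 * np.sin(np.linspace(0, np.pi, 50))])
      feature_point, kappa = max_curvature_point(fit_cubic_bezier(ridge))
      print(feature_point, kappa)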

  6. Feature Extraction of Chinese Materia Medica Fingerprint Based on Star Plot Representation of Multivariate Data

    Institute of Scientific and Technical Information of China (English)

    CUI Jian-xin; HONG Wen-xue; ZHOU Rong-juan; GAO Hai-bo

    2011-01-01

    Objective To study a novel feature extraction method for Chinese materia medica (CMM) fingerprints. Methods On the basis of the radar (star plot) graphical presentation theory for multivariate data, the radar map was used to represent the non-graphical parameters of the CMM fingerprint, and then to extract the map features and perform feature fusion. Results Better performance was achieved when applying this method to test data. Conclusion This shows that feature extraction based on radar chart presentation can mine valuable features that facilitate the identification of Chinese medicine.

  7. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    Science.gov (United States)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high resolution optical remote sensing data consisting of 8-band calibrated Gram Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for Antarctic region using 8-spectral band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using few existing traditional pixel-based classification techniques and color-texture based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, compactness along with spectral information from the data were used for developing the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensive improved method for semiautomatic feature extraction in supraglacial environment and a new direction in the cryospheric research.

  8. A Neuro-Fuzzy based System for Classification of Natural Textures

    Science.gov (United States)

    Jiji, G. Wiselin

    2016-12-01

    A statistical approach based on the coordinated clusters representation of images is used for the classification and recognition of textured images. In this paper, two issues are addressed: the first is the extraction of texture features from the fuzzy texture spectrum, in the chromatic and achromatic domains, of each colour component histogram of natural texture images; the second is the fusion of multiple classifiers. The implementation of an advanced neuro-fuzzy learning scheme has also been adopted in this paper. The results of classification tests show the high performance of the proposed method, which may have industrial application for texture classification, when compared with other works.

  9. SAR Image Texture Analysis of Oil Spill

    Science.gov (United States)

    Ma, Long; Li, Ying; Liu, Yu

    Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their impact on fragile marine and coastal ecosystems. In order to mount an emergency response in case of an oil spill, it is necessary to monitor oil spills using remote sensing. Spaceborne SAR is considered a promising method for monitoring oil spills and has attracted the attention of many researchers. However, research on SAR image texture analysis of oil spills is rarely reported. On 7 December 2007, a crane-carrying barge hit the Hong Kong-registered tanker "Hebei Spirit", which released an estimated 10,500 metric tons of crude oil into the sea. Texture features of this oil spill were acquired from the GLCM (Grey Level Co-occurrence Matrix) extracted from SAR data. The affected area was extracted successfully after evaluating the capabilities of different texture features to monitor the oil spill. The results reveal that texture is an important feature for oil spill monitoring. Key words: oil spill, texture analysis, SAR

  10. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    Science.gov (United States)

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  11. Directional EMD and its application to texture segmentation

    Institute of Scientific and Technical Information of China (English)

    LIU Zhongxuan; PENG Silong

    2005-01-01

    In this paper we present the definition and framework of Directional Empirical Mode Decomposition (DEMD) and use DEMD for texture segmentation. As a technique of time-frequency analysis, EMD decomposes signals by sifting and then analyzes the instantaneous frequency of the obtained components, called Intrinsic Mode Functions (IMFs). Compared with Bidimensional EMD (BEMD), which only extracts textures by radial basis function interpolation, the virtues of DEMD include: the directional quality is considered in this framework, and four features can be extracted for each point from the decomposition. A technique for selecting directions for DEMD based on the Wold theory of textures is also presented. Experimental results indicate the effectiveness of the method for texture segmentation. In addition, we explain DEMD's ability for texture classification from a visual point of view.

  12. Feature extraction for target identification and image classification of OMIS hyperspectral image

    Institute of Scientific and Technical Information of China (English)

    DU Pei-jun; TAN Kun; SU Hong-jun

    2009-01-01

    In order to combine feature extraction operations with specific hyperspectral remote sensing information processing objectives, two aspects of feature extraction were explored. Based on clustering and decision tree algorithms, the spectral absorption index (SAI), continuum removal, and derivative spectral analysis were employed to discover the characteristic spectral features of different targets, and decision trees for identifying a specific class and discriminating different classes were generated. By combining a support vector machine (SVM) classifier with different feature extraction strategies, including principal component analysis (PCA), minimum noise fraction (MNF), grouped PCA, and derivative spectral analysis, the performance of the feature extraction approaches in classification was evaluated. The results show that feature extraction by PCA and derivative spectral analysis is effective for OMIS (operational modular imaging spectrometer) image classification using SVM, and that SVM outperforms the traditional SAM and MLC classifiers for OMIS data.

  13. Monitoring Thermal Coagulation with Ultrasonic Textures

    Institute of Scientific and Technical Information of China (English)

    YANG Wei; ZHANG Su; CHEN Ya-zhu; CHEN Lei; HU Bing; MA Wei-yin

    2007-01-01

    The feasibility of using B-mode ultrasound image textures and pattern recognition techniques to characterize thermal coagulation in vitro during radiofrequency ablation was investigated. In the in-vitro experiments, the ultrasonic textures in different regions of the samples changed with heating time, so that the coagulated and noncoagulated regions of tissue exhibited different ultrasonic textures. By using a support vector machine on the ultrasonic texture features to characterize the state of the tissue, the size and boundaries of thermal lesions could be detected and measured more accurately than by using the gray-scale information of the B-mode ultrasound image alone. The proposed method could be applied to the image-guided radiofrequency ablation (IGRA) procedure for monitoring thermal coagulation.

  14. Texture classification based on EMD and FFT

    Institute of Scientific and Technical Information of China (English)

    XIONG Chang-zhen; XU Jun-yi; ZOU Jian-cheng; QI Dong-xu

    2006-01-01

    Empirical mode decomposition (EMD) is an adaptive and approximately orthogonal filtering process that reflects the human visual mechanism of differentiating textures. In this paper, we present a modified 2D EMD algorithm using the FastRBF and an appropriate number of iterations in the sifting process (SP), and then apply it to texture classification. Rotation-invariant texture feature vectors are extracted using auto-registration and circular regions of the magnitude spectra of the 2D fast Fourier transform (FFT). In the experiments, we employ a Bayesian classifier to classify a set of 15 distinct natural textures selected from the Brodatz album. The experimental results, based on different testing datasets for images with different orientations, show the effectiveness of the proposed classification scheme.
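    One common way to obtain rotation-invariant features from circular regions of the FFT magnitude spectrum is to average the spectrum over concentric rings; the sketch below illustrates that idea on a random patch and is not the authors' exact scheme (which also uses auto-registration).

```python
import numpy as np

def ring_spectrum_features(patch, n_rings=8):
    """Average FFT magnitude over concentric rings around the spectrum centre.

    Rotating the texture rotates its spectrum, so per-ring averages are
    approximately rotation invariant.
    """
    f = np.fft.fftshift(np.abs(np.fft.fft2(patch)))
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx)
    r_max = r.max()
    feats = []
    for i in range(n_rings):
        mask = (r >= i * r_max / n_rings) & (r < (i + 1) * r_max / n_rings)
        feats.append(f[mask].mean())
    return np.array(feats)

patch = np.random.rand(64, 64)   # stand-in for a texture patch
print(ring_spectrum_features(patch))
```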

  15. Multi-Scale Analysis Based Curve Feature Extraction in Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    YANG Hongjuan; ZHOU Yiqi; CHEN Chengjun; ZHAO Zhengxu

    2006-01-01

    A sectional curve feature extraction algorithm based on multi-scale analysis is proposed for reverse engineering. The algorithm consists of two parts: feature segmentation and feature classification. In the first part, curvature scale space is applied to multi-scale analysis and original feature detection. To obtain the primary and secondary curve primitives, feature fusion is realized by transmitting multi-scale feature detection information. In the second part, a projection height function based on the area of a quadrilateral is presented, which improves the criteria for sectional curve feature classification. Results on synthetic curves and practical scanned sectional curves are given to illustrate the efficiency of the proposed algorithm for feature extraction. The consistency between feature extraction based on multi-scale curvature analysis and the curve primitives is verified.

  16. Lung Texture in Serial Thoracic Computed Tomography Scans: Correlation of Radiomics-based Features With Radiation Therapy Dose and Radiation Pneumonitis Development

    Energy Technology Data Exchange (ETDEWEB)

    Cunliffe, Alexandra; Armato, Samuel G. [Department of Radiology, The University of Chicago, Chicago, Illinois (United States); Castillo, Richard [Department of Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas (United States); Pham, Ngoc [Baylor College of Medicine, Houston, Texas (United States); Guerrero, Thomas [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Al-Hallaq, Hania A., E-mail: hal-hallaq@radonc.uchicago.edu [Department of Radiation and Cellular Oncology, The University of Chicago, Chicago, Illinois (United States)

    2015-04-01

    Purpose: To assess the relationship between radiation dose and change in a set of mathematical intensity- and texture-based features and to determine the ability of texture analysis to identify patients who develop radiation pneumonitis (RP). Methods and Materials: A total of 106 patients who received radiation therapy (RT) for esophageal cancer were retrospectively identified under institutional review board approval. For each patient, diagnostic computed tomography (CT) scans were acquired before (0-168 days) and after (5-120 days) RT, and a treatment planning CT scan with an associated dose map was obtained. 32- × 32-pixel regions of interest (ROIs) were randomly identified in the lungs of each pre-RT scan. ROIs were subsequently mapped to the post-RT scan and the planning scan dose map by using deformable image registration. The changes in 20 feature values (ΔFV) between pre- and post-RT scan ROIs were calculated. Regression modeling and analysis of variance were used to test the relationships between ΔFV, mean ROI dose, and development of grade ≥2 RP. Area under the receiver operating characteristic curve (AUC) was calculated to determine each feature's ability to distinguish between patients with and those without RP. A classifier was constructed to determine whether 2- or 3-feature combinations could improve RP distinction. Results: For all 20 features, a significant ΔFV was observed with increasing radiation dose. Twelve features changed significantly for patients with RP. Individual texture features could discriminate between patients with and those without RP with moderate performance (AUCs from 0.49 to 0.78). Using multiple features in a classifier, AUC increased significantly (0.59-0.84). Conclusions: A relationship between dose and change in a set of image-based features was observed. For 12 features, ΔFV was significantly related to RP development. This study demonstrated the ability of radiomics to provide a quantitative, individualized
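    The per-feature analysis can be illustrated with a short sketch that computes the change in feature values (ΔFV) between matched ROIs and the AUC of each ΔFV for discriminating RP; the data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient feature values in matched ROIs before/after RT.
# Shapes: (n_patients, n_features); rp is 1 if grade >= 2 pneumonitis developed.
rng = np.random.default_rng(1)
fv_pre = rng.normal(size=(106, 20))
fv_post = fv_pre + rng.normal(scale=0.5, size=(106, 20))
rp = rng.integers(0, 2, size=106)

delta_fv = fv_post - fv_pre   # change in each feature value (delta-FV)

# Per-feature ability to separate RP from non-RP patients.
aucs = [roc_auc_score(rp, delta_fv[:, j]) for j in range(delta_fv.shape[1])]
for j, auc in enumerate(aucs):
    print(f"feature {j}: AUC = {max(auc, 1 - auc):.2f}")  # orientation-free AUC
```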

  17. Parameter analysis of texture feature in oil spill detection based on SAR

    Institute of Scientific and Technical Information of China (English)

    魏铼; 胡卓玮

    2013-01-01

    …objects with the same spectrum or similar roughness. Hence, texture information is combined with the traditional image information to improve the extraction accuracy of oil spills. In texture analysis there are many parameters that directly affect the extraction accuracy, so selecting appropriate parameters is important. In this paper, we choose three SAR images from the same orbit covering the Bohai Sea area in 2006 as the data source, and use a method based on the gray level co-occurrence matrix (GLCM) to analyze texture features. Because GLCM-based texture analysis characterizes the image surface well and describes texture features in detail through the gray-level correlation of pixels, it is well suited to marine oil spill detection in SAR images. We then discuss, test, select and verify the texture analysis parameters. Finally, this paper selected four parameters (local stationarity, non-similarity, contrast and change) as the texture feature statistics, determined their values, and applied a neural network classification that considered both the texture features and the SAR backscattering coefficient. With a classification accuracy of up to 80.65%, the method of combining traditional information with texture information to extract oil spills proves feasible and effective, and it lays a good foundation for future studies on marine oil spill detection.
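    A minimal sketch of GLCM-based texture feature computation for an image patch, using scikit-image; the patch, distances, angles, and the particular statistics shown are illustrative and do not reproduce the parameter choices selected in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(patch, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """GLCM statistics averaged over directions for one image patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    stats = {}
    for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation"):
        stats[prop] = graycoprops(glcm, prop).mean()
    return stats

patch = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in for a SAR patch
print(glcm_features(patch))
```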

  18. Compressive sensing-based feature extraction for bearing fault diagnosis using a heuristic neural network

    Science.gov (United States)

    Yuan, Haiying; Wang, Xiuyu; Sun, Xun; Ju, Zijian

    2017-06-01

    Bearing fault diagnosis collects massive amounts of vibration data about a rotating machinery system, whose fault classification largely depends on feature extraction. Features reflecting bearing work states are directly extracted using time-frequency analysis of vibration signals, which leads to high dimensional feature data. To address the problem of feature dimension reduction, a compressive sensing-based feature extraction algorithm is developed to construct a concise fault feature set. Next, a heuristic PSO-BP neural network, whose learning process perfectly combines particle swarm optimization and the Levenberg-Marquardt algorithm, is constructed for fault classification. Numerical simulation experiments are conducted on four datasets sampled under different severity levels and load conditions, which verify that the proposed fault diagnosis method achieves efficient feature extraction and high classification accuracy.
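    The dimension-reduction idea behind compressive sensing-based feature extraction can be sketched as a random Gaussian projection of the high-dimensional time-frequency features; the matrix sizes and measurement count below are placeholders, and the PSO-BP classifier stage is omitted.

```python
import numpy as np

def compress_features(feature_matrix, n_measurements, seed=0):
    """Project high-dimensional feature vectors onto a random Gaussian basis.

    feature_matrix: (n_samples, n_dims) time-frequency features per vibration record.
    Returns an (n_samples, n_measurements) compressed feature set.
    """
    n_dims = feature_matrix.shape[1]
    rng = np.random.default_rng(seed)
    phi = rng.normal(scale=1.0 / np.sqrt(n_measurements), size=(n_measurements, n_dims))
    return feature_matrix @ phi.T

X = np.random.rand(200, 2048)    # 200 vibration records, 2048 raw features
X_cs = compress_features(X, 64)  # concise 64-dimensional fault feature set
print(X_cs.shape)                # (200, 64)
```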

  19. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the applicability of the resulting facial models to practical applications such as face recognition and facial animation.

  20. A Scheme of sEMG Feature Extraction for Improving Myoelectric Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    Shuai Ding; Liang Wang

    2016-01-01

    This paper proposes a feature extraction scheme based on sparse representation, considering the non-stationary property of the surface electromyography (sEMG) signal. A Sparse Bayesian Learning (SBL) algorithm was introduced to extract a feature with optimal class separability in order to improve the recognition accuracy of multi-movement patterns. The SBL algorithm exploits the compressibility (or weak sparsity) of the sEMG signal in some transformed domains. The feature extracted with the SBL algorithm was named SRC, and it represents the time-varying characteristics of the sEMG signal very effectively. We investigated the effect of the SRC feature by comparing it with fourteen other individual features and eighteen multi-feature sets in offline recognition. The results demonstrate that the SRC feature reveals important dynamic information in the sEMG signals, and that multi-feature sets formed by combining SRC with other single features yield superior recognition accuracy. The best average recognition accuracy of 91.67% was obtained by using an SVM classifier with the multi-feature set combining the SRC feature and the waveform length (WL) feature. The proposed feature extraction scheme is promising for multi-movement recognition with high accuracy.
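    Assuming WL denotes the standard waveform length feature from the sEMG literature, the sketch below shows how such time-domain features are computed per channel and window; the window size, channel count, and the companion MAV feature are illustrative assumptions.

```python
import numpy as np

def waveform_length(window):
    """WL: cumulative absolute difference of consecutive samples in one window."""
    return np.sum(np.abs(np.diff(window)))

def mean_absolute_value(window):
    """MAV, another common time-domain sEMG feature."""
    return np.mean(np.abs(window))

# Hypothetical 8-channel sEMG window of 200 samples.
window = np.random.randn(8, 200)
features = np.array([[waveform_length(ch), mean_absolute_value(ch)] for ch in window])
print(features.shape)  # (8, 2): per-channel WL and MAV, to be combined with SRC
```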

  1. Unsupervised texture image segmentation using multilayer data condensation spectral clustering

    Science.gov (United States)

    Liu, Hanqiang; Jiao, Licheng; Zhao, Feng

    2010-07-01

    A novel unsupervised texture image segmentation method using a multilayer data condensation spectral clustering algorithm is presented. First, the texture features of each image pixel are extracted by the stationary wavelet transform, and a multilayer data condensation method is applied to this texture feature data set to obtain a condensation subset. Second, a spectral clustering algorithm based on a manifold similarity measure is used to cluster the condensation subset. Finally, according to the clustering result of the condensation subset, the nearest-neighbor method is adopted to obtain the segmentation of the original image. In the experiments, we apply our method to texture and synthetic aperture radar image segmentation and use self-tuning k-nearest-neighbor spectral clustering and Nyström methods as baselines. The experimental results show that the proposed method is more robust and effective for texture image segmentation.
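    A minimal sketch of per-pixel texture feature extraction with the stationary wavelet transform, using PyWavelets; the wavelet, decomposition level, and patch size are assumptions, and the subsequent data condensation and spectral clustering steps are not shown.

```python
import numpy as np
import pywt

def swt_texture_features(patch, wavelet="db1", level=2):
    """Per-pixel texture features from the stationary wavelet transform.

    Returns an (H, W, 3*level) array of absolute detail responses (LH, HL, HH)
    at each decomposition level, keeping the original image size.
    """
    coeffs = pywt.swt2(patch.astype(float), wavelet, level=level)
    bands = []
    for _, (ch, cv, cd) in coeffs:
        bands.extend([np.abs(ch), np.abs(cv), np.abs(cd)])
    return np.stack(bands, axis=-1)

patch = np.random.rand(64, 64)        # side length divisible by 2**level
features = swt_texture_features(patch)
print(features.shape)                 # (64, 64, 6)
```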

  2. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of pairs of developable strips that approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences as well as sequential groups of data points. A developable surface approximation procedure is employed to refine the incident approximation planes of data points into developable strips. Experimental results are included to demonstrate the performance of the proposed method.

  3. Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction

    Science.gov (United States)

    2014-03-27

    Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction. Thesis by Matthew P. Crosser, Captain, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering, Graduate School, Air Force Institute of Technology (AFIT-ENG-14-M-21). Approved for public release; distribution unlimited.

  4. Motor Vehicle Identification Based on Texture Feature and BP Neural Network

    Institute of Scientific and Technical Information of China (English)

    张秀林; 王浩全; 刘玉; 安然

    2013-01-01

    On the basis that using the convolution of a Gabor wavelet filter bank with the image as the feature vector can achieve a high recognition rate, a feature-weighted method of extracting Gabor wavelet texture features is proposed. First, the Gabor wavelet functions are convolved with the texture image. Then the convolution values at different scales and different orientations are weighted, and their mean and variance are taken as the feature vector, which considerably reduces the feature dimension. Finally, a BP neural network is used for training and simulation in order to achieve automatic classification of the texture images of moving vehicles and thus recognition of moving vehicles. The experimental results show that this algorithm effectively reduces image recognition errors and enhances robustness, and it can recognize poor-quality images effectively.
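    A minimal sketch of Gabor filter bank texture feature extraction with per-response mean and variance, using scikit-image; the frequencies, orientations, and the unweighted statistics are illustrative assumptions rather than the weighted scheme proposed in the paper.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Mean and variance of Gabor filter responses over scales and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.array(feats)

image = np.random.rand(64, 64)               # stand-in for a vehicle texture patch
print(gabor_texture_features(image).shape)   # (24,) = 3 scales x 4 orientations x 2 stats
```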

  5. 2D-HIDDEN MARKOV MODEL FEATURE EXTRACTION STRATEGY OF ROTATING MACHINERY FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new feature extraction method based on a 2D hidden Markov model (HMM) is proposed, and the time index and frequency index are introduced to represent the new features. The new feature extraction strategy is tested on experimental data collected from a Bently rotor experiment system. The results show that this methodology is very effective for extracting features of vibration signals during the rotor speed-up course and can be extended to other non-stationary signal analysis fields in the future.

  6. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    Science.gov (United States)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern set, that is, on how patterns are distributed in the feature space. One reason we have pointed out is that ICA features are obtained by increasing only their independence, even if class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. The results show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by only maximizing the Mahalanobis distance.

  7. Feature Extraction and Spatial Interpolation for Improved Wireless Location Sensing

    Directory of Open Access Journals (Sweden)

    Chris Rizos

    2008-04-01

    Full Text Available This paper proposes a new methodology to improve location-sensing accuracy in wireless network environments by eliminating the effects of non-line-of-sight errors. After collecting bulk anonymous location measurements from a wireless network, the preparation stage of the proposed methodology begins. By investigating the collected location measurements in terms of signal features and geometric features, feature locations are identified. After the identification of feature locations, non-line-of-sight error correction maps are generated. During the real-time location-sensing stage, each user can request localization with a set of location measurements. With respect to the reported measurements, the pre-computed correction maps are applied. As a result, localization accuracy improves by eliminating the non-line-of-sight errors. A simulation result, assuming a typical dense urban environment, demonstrates the benefits of the proposed location-sensing methodology.

  8. Comparing the role of shape and texture on staging hepatic fibrosis from medical imaging

    Science.gov (United States)

    Zhang, Xuejun; Louie, Ryan; Liu, Brent J.; Gao, Xin; Tan, Xiaomin; Qu, Xianghe; Long, Liling

    2016-03-01

    The purpose of this study is to investigate the roles of shape and texture in the classification of hepatic fibrosis by selecting the optimal parameters for a better computer-aided diagnosis (CAD) system. Ten surface shape features are extracted from a standardized profile of the liver, while 15 texture features calculated from the gray level co-occurrence matrix (GLCM) are extracted within an ROI in the liver. Each combination of these input subsets is checked by using a support vector machine (SVM) with the leave-one-case-out method to differentiate fibrosis into two groups: normal or abnormal. Using all 15 texture features, the classification accuracy was 66.83%, while using all 10 shape features it was 85.74%. The irregularity of liver shape reflects the fibrotic grade efficiently, and the texture features of the CT image are not recommended for use together with the shape features in the interpretation of cirrhosis.
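    A minimal sketch of leave-one-out SVM evaluation on shape, texture, and combined feature subsets, using scikit-learn; the feature matrices and labels below are synthetic placeholders, not the study's patient data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical case data: 10 shape features, 15 GLCM texture features per patient.
rng = np.random.default_rng(2)
shape_feats = rng.normal(size=(60, 10))
texture_feats = rng.normal(size=(60, 15))
labels = rng.integers(0, 2, size=60)   # 0 = normal, 1 = fibrotic

loo = LeaveOneOut()
for name, X in [("shape", shape_feats),
                ("texture", texture_feats),
                ("shape+texture", np.hstack([shape_feats, texture_feats]))]:
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=loo).mean()
    print(f"{name:>14s}: leave-one-out accuracy = {acc:.3f}")
```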

  9. Combination of heterogeneous EEG feature extraction methods and stacked sequential learning for sleep stage classification.

    Science.gov (United States)

    Herrera, L J; Fernandes, C M; Mora, A M; Migotina, D; Largo, R; Guillen, A; Rosa, A C

    2013-06-01

    This work proposes a methodology for sleep stage classification based on two main approaches: the combination of features extracted from electroencephalogram (EEG) signal by different extraction methods, and the use of stacked sequential learning to incorporate predicted information from nearby sleep stages in the final classifier. The feature extraction methods used in this work include three representative ways of extracting information from EEG signals: Hjorth features, wavelet transformation and symbolic representation. Feature selection was then used to evaluate the relevance of individual features from this set of methods. Stacked sequential learning uses a second-layer classifier to improve the classification by using previous and posterior first-layer predicted stages as additional features providing information to the model. Results show that both approaches enhance the sleep stage classification accuracy rate, thus leading to a closer approximation to the experts' opinion.
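    Of the three feature extraction methods combined in the study, the Hjorth features are the simplest to illustrate; the sketch below computes activity, mobility, and complexity for one epoch, with the epoch length chosen arbitrarily.

```python
import numpy as np

def hjorth_parameters(epoch):
    """Hjorth activity, mobility, and complexity for one EEG epoch."""
    d1 = np.diff(epoch)
    d2 = np.diff(d1)
    var0, var1, var2 = np.var(epoch), np.var(d1), np.var(d2)
    activity = var0
    mobility = np.sqrt(var1 / var0)
    complexity = np.sqrt(var2 / var1) / mobility
    return activity, mobility, complexity

epoch = np.random.randn(3000)   # e.g. 30 s of EEG sampled at 100 Hz
print(hjorth_parameters(epoch))
```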

  10. Performance Comparison between Different Feature Extraction Techniques with SVM Using Gurumukhi Script

    Directory of Open Access Journals (Sweden)

    Sandeep Dangi,

    2014-07-01

    Full Text Available This paper presents offline handwritten character recognition for the Gurumukhi script, a major script of India. Much work has been done for many languages such as English, Chinese, Devanagari and Tamil. Gurumukhi is the script of the Punjabi language, which is widely spoken across the globe. This paper focuses on better character recognition accuracy. The dataset includes 7000 samples collected in different writing styles, divided into a training set of 5600 samples and a test set of 1400 samples. The evaluated feature extraction techniques include distance profile, diagonal features and BDD (background direction distribution). These features were classified using an SVM classifier. A performance comparison has been made using one classifier with different feature extraction techniques. The experiments show that the diagonal feature extraction method achieved the highest recognition accuracy, 95.39%, among the evaluated feature extraction methods.
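    A minimal sketch of zone-based diagonal feature extraction from a character image; the image size, zone size, and the use of simple diagonal means are illustrative assumptions and may differ in detail from the paper's implementation.

```python
import numpy as np

def diagonal_features(char_img, zone=10):
    """One feature per zone: the mean of the zone's diagonal means.

    char_img is assumed to be a binary character image whose sides are
    multiples of the zone size (e.g. 90 x 60 with 10-pixel zones).
    """
    h, w = char_img.shape
    feats = []
    for i in range(0, h, zone):
        for j in range(0, w, zone):
            block = char_img[i:i + zone, j:j + zone]
            diag_means = [np.mean(np.diag(block, k)) for k in range(-zone + 1, zone)]
            feats.append(np.mean(diag_means))
    return np.array(feats)

img = (np.random.rand(90, 60) > 0.5).astype(float)  # stand-in for a character bitmap
print(diagonal_features(img).shape)                 # (54,) = 9 x 6 zones
```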

  11. Comparison of half and full-leaf shape feature extraction for leaf classification

    Science.gov (United States)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification uses the whole leaf for feature extraction in the identification process. In this paper, half-leaf feature extraction for leaf identification is studied, and the results are compared with those obtained from identification based on full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare performance accuracies on these data. The classifiers were trained using 65 leaves in order to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  12. Correlation of pretreatment {sup 18}F-FDG PET tumor textural features with gene expression in pharyngeal cancer and implications for radiotherapy-based treatment outcomes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shang-Wen [China Medical University Hospital, Department of Radiation Oncology, Taichung (China); China Medical University, School of Medicine, Taichung (China); Taipei Medical University, School of Medicine, Taipei (China); China Medical University, Graduate Institute of Clinical Medical Science, School of Medicine, College of Medicine, Taichung (China); Shen, Wei-Chih [China Medical University, Cancer Center and Department of Medical Research, China Medical University Hospital, Taichung (China); Asia University, Department of Computer Science and Information Engineering, Taichung (China); Lin, Ying-Chun [China Medical University Hospital,