WorldWideScience

Sample records for image feature extraction

  1. Tongue Image Feature Extraction in TCM

    Institute of Scientific and Technical Information of China (English)

    LI Dong; DU Lian-xiang; LU Fu-ping; DU Jun-ping

    2004-01-01

    In this paper, digital image processing and computer vision techniques are applied to tongue images for feature extraction, using VC++ and Matlab. Extraction and analysis of tongue surface features are based on shape, color, edge, and texture. The developed software offers various functions and a good user interface and is easy to use. It provides feature data for tongue image pattern recognition, forming a sound basis for future tongue image recognition.

  2. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
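A watershed segmentation of the kind described operates on a gradient-magnitude surface. The sketch below computes a Sobel approximation of such a surface in plain Python, as a simplified stand-in for the Canny gradient named in the abstract; the function and the toy step-edge image are illustrative, not from the patent.

```python
def gradient_magnitude(img):
    """Approximate the gradient magnitude with 3x3 Sobel kernels.
    The result is the edge-strength surface a watershed would flood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A tiny image with a vertical step edge: the gradient peaks at the edge.
img = [[0, 0, 9, 9]] * 4
g = gradient_magnitude(img)
```

Regions whose gradient response forms closed contours would then be flooded from local minima by the watershed step.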

  3. Medical Image Feature Extraction, Selection And Classification

    Directory of Open Access Journals (Sweden)

    M. VASANTHA

    2010-06-01

    Full Text Available Breast cancer is the most common type of cancer found in women; one in 22 women in India is likely to suffer from it. This paper proposes an image classifier to classify mammogram images into normal, benign, and malignant classes. In total, 26 features, including histogram intensity features and GLCM features, are extracted from each mammogram image. A hybrid feature selection approach is proposed that reduces the feature set by 75%. Decision tree algorithms are then applied to mammogram classification using these reduced features. Experimental results have been obtained for a data set of 113 images of different types taken from MIAS. This classification technique has not been attempted before, and it reveals the potential of data mining in medical treatment.
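The GLCM features mentioned above can be sketched as follows. A gray-level co-occurrence matrix counts pixel pairs at a fixed offset; texture features such as contrast and energy are then computed from the normalized counts. The offset, number of gray levels, and toy image are illustrative assumptions, not the paper's settings.

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence counts for pixel pairs at offset (dx, dy)."""
    h, w = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def glcm_features(m):
    """Contrast and energy of a normalized co-occurrence matrix."""
    total = sum(sum(row) for row in m)
    p = [[v / total for v in row] for row in m]
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    return contrast, energy

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
contrast, energy = glcm_features(glcm(img))
```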

  4. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth, and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on preselected face-candidate regions. Likewise, for eye and mouth localization, color information and the local contrast around the eyes are used. The face-boundary ellipse is determined using the gradient image and the Hough transform. The algorithm was tested on the FERET image database.
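Skin-color localization of face candidates can be sketched with an explicit per-pixel RGB rule. The thresholds below are one commonly used rule from the literature, an assumption here, since the abstract does not give the paper's exact color model.

```python
def is_skin(r, g, b):
    """A commonly cited explicit RGB skin-color rule (an assumption;
    the paper's actual color model is not specified in the abstract)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(img):
    """Binary mask of skin-colored pixels; connected regions of 1s
    would then become the face-candidate regions."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in img]

row = [(200, 120, 90), (30, 30, 30)]   # one skin-like, one dark pixel
mask = skin_mask([row])
```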

  5. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, and bilateral filtering.

  6. Remote Sensing Image Feature Extracting Based Multiple Ant Colonies Cooperation

    Directory of Open Access Journals (Sweden)

    Zhang Zhi-long

    2014-02-01

    Full Text Available This paper presents a novel feature extraction method for remote sensing imagery based on the cooperation of multiple ant colonies. First, a multiresolution expression of the input remote sensing imagery is created, and two different ant colonies are spread on images of different resolutions. The ant colony in the low-resolution image uses phase congruency as its inspiration information, whereas that in the high-resolution image uses gradient magnitude. The two ant colonies cooperate to detect features in the image by sharing the same pheromone matrix. Finally, the image features are extracted by thresholding the pheromone matrix. Because a substantial amount of information in the input image is used as inspiration information for the ant colonies, the proposed method shows higher intelligence and acquires more complete and meaningful image features than simple edge detectors do.

  7. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d' Optica Aplicada i Processament d' Imatge, Departament d' Optica i Optometria Univesitat Politecnica de Catalunya (Spain)

    2011-01-01

    Image processing, analysis, and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique to compensate for the non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.
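One minimal form of compensation for non-uniform luminosity is to subtract a local-mean background estimate from each pixel. This is a simplified stand-in for the enhancement technique the abstract describes; the window size and flat test image are assumptions.

```python
def compensate(img, k=3):
    """Subtract a k x k local-mean background estimate from each pixel,
    flattening slowly varying luminosity while keeping local detail."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = img[y][x] - sum(vals) / len(vals)
    return out

# A perfectly flat image is mapped to (approximately) zero everywhere.
flat = [[5] * 4 for _ in range(4)]
res = compensate(flat)
```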

  8. THE IDENTIFICATION OF PILL USING FEATURE EXTRACTION IN IMAGE MINING

    Directory of Open Access Journals (Sweden)

    A. Hema

    2015-02-01

    Full Text Available With the help of image mining techniques, an automatic pill identification system is investigated in this study for matching pill images based on several features: imprint, color, size, and shape. Image mining is an interdisciplinary task requiring expertise from fields such as computer vision, image retrieval, image matching, and pattern recognition. It is the method by which unusual patterns are detected, so that only hidden and useful image data need be stored in a large database, and it involves two different approaches to image matching. This research presents drug identification, registration, detection, and matching, with text, color, and shape extraction of the image, using image mining concepts to distinguish legal from illegal pills with greater accuracy. Initially, preprocessing is carried out using a novel interpolation algorithm whose main aim is to reduce the artifacts, blurring, and jagged edges introduced during up-sampling. The registration process is then proposed with two modules: feature extraction and corner detection. In feature extraction, noisy high-frequency edges are discarded and relevant high-frequency edges are selected; the corner detection approach detects high-frequency pixels at intersection points. Through this, the overall performance is improved. The dataset must be segregated into groups based on the query image's size, shape, color, text, etc.; this process of segregating the required information is called feature extraction, and it is done using geometrical gradient feature transformation. Finally, color and shape features are extracted using a color histogram and a geometrical gradient vector. Simulation results show that the proposed techniques provide accurate retrieval results, in terms of both time and accuracy, compared to conventional approaches.
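The color-histogram feature named above can be sketched as channel-wise quantized pixel counts; the concatenated, normalized counts form the feature vector. The bin count and toy image are illustrative assumptions.

```python
def color_histogram(img, bins=4):
    """Quantize each RGB channel into `bins` ranges, count pixels per
    bin, and return the concatenated counts normalized by pixel count."""
    hist = [0] * (bins * 3)
    n = 0
    for row in img:
        for r, g, b in row:
            for c, v in enumerate((r, g, b)):
                hist[c * bins + min(v * bins // 256, bins - 1)] += 1
            n += 1
    return [h / n for h in hist]

# Two pure-red pixels: all mass lands in the top red bin and the
# bottom green and blue bins.
img = [[(255, 0, 0), (255, 0, 0)]]
feat = color_histogram(img)
```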

  9. Feature extraction for an image retrieving scheme

    OpenAIRE

    Fuertes García, José Manuel; Lucena López, Manuel José; Pérez de la Blanca Capilla, Nicolás; Fernández Valdivia, Joaquín

    1999-01-01

    In this paper we present two basic modules within the designed scheme for retrieving images of a database from the object colour and shape in the scenes. On the one hand, we design a new method to detect edges in colour images. We offer a new approach to the perceptual space (H,S,I) (a Uniform Chromatic Scale space), about which we describe its properties as well as the metric to work in it. On the other hand, we develop an information-simplifying process to form a graphic structure in which t...

  10. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighboring pixels has not been published. HLAC features are shift-invariant and were therefore applied to retinal images. However, HLAC features are not robust to rotated images, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and four other inputs: the output of the first ANN, a Gabor filter, a double-ring filter, and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. In our study, the AUC of ANN2 was 0.960. The result can be used for quantitative analysis of the blood vessels.
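HLAC features are sums, over all image positions, of products of pixel values at fixed offsets, which is what makes them shift-invariant. The sketch below uses four illustrative low-order masks rather than the 105 patterns cited in the abstract; the offsets and toy image are assumptions.

```python
def hlac_features(img, offsets=((0, 0), (0, 1), (1, 0), (1, 1))):
    """HLAC-style features on a binary image: for each mask (here the
    reference pixel times one offset neighbour), sum the product over
    all valid positions. Shifting the image leaves the sums unchanged."""
    h, w = len(img), len(img[0])
    feats = []
    for dy, dx in offsets:
        s = 0
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    s += img[y][x] * img[ny][nx]
        feats.append(s)
    return feats

# A vertical line of foreground pixels: strong response for the
# vertical offset (1, 0), none for the horizontal offset (0, 1).
img = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
f = hlac_features(img)
```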

  11. Feature extraction with LIDAR data and aerial images

    Science.gov (United States)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data is an irregularly spaced 3D point cloud including reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analyzing the point cloud is feature extraction. However, the interpretability of a LIDAR point cloud is often limited because no object information is provided, and the complexity of earth topography and object morphology makes it impossible for a single operator to classify the entire point cloud with perfect precision. In this paper, a hierarchical method for feature extraction from LIDAR data and aerial images is discussed. The aerial images provide information on object configuration and spatial distribution, and hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it was possible to detect more object information and obtain a better feature extraction result than by using automatic filters alone.

  12. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, the local anomalistic blocks in the iris, such as crypts, furrows, and so on, are initially used directly as iris features. A novel image segmentation method based on the intersecting cortical model (ICM) neural network was introduced to segment these anomalistic blocks. First, the normalized iris image was put into the ICM neural network after enhancement. Second, the iris features were segmented out and output as binary images by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network was chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.

  13. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To address the difficulty of precisely extracting target outlines, caused by neglecting the variation of target scattering characteristics during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important aspects that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency behaviour of the target scattering characteristics. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation angle range conditions are put forward to improve SAR image quality through fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  14. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet for feature extraction on iris recognition performance. In addition, an effective noise removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor wavelet technique improves the recognition rate. PMID:24696801
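Mask-aware matching as described reduces to a fractional Hamming distance computed only over bit positions that both noise masks mark as valid. A minimal sketch; the bit layout and toy codes are illustrative, not the paper's encoding.

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two iris codes, counting
    only positions flagged valid (1) in both noise masks."""
    valid = disagree = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            if a != b:
                disagree += 1
    return disagree / valid if valid else 1.0

a   = [1, 0, 1, 1, 0, 0]
b   = [1, 1, 1, 0, 0, 1]
m_a = [1, 1, 1, 1, 1, 0]   # last bit of A flagged noisy (eyelash/reflection)
m_b = [1, 1, 1, 1, 1, 1]
d = masked_hamming(a, b, m_a, m_b)   # 2 disagreements over 5 valid bits
```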

  15. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

    Full Text Available The availability of large volumes of remote sensing data calls for a higher degree of automation in feature extraction, making it the need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral, and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation, or bodies of water by using a combination of spectral and elevation characteristics. Utilizing the aforementioned features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice. The huge quantum of data that needs to be processed entails accelerated processing. GPUs, which were originally designed to provide efficient visualization, are being massively employed in computation-intensive parallel processing environments. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques. Two Laplacian of Gaussian (LoG) masks were applied to the image individually, followed by detection of zero-crossing points and extraction of pixels based on their standard deviation with respect to the surrounding pixels. The two extracted images from the different LoG masks were combined, resulting in an image with the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content of the extracted image: the image is passed through a hybrid median filter to remove salt-and-pepper noise. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, and system-level challenges, and quantifies the benefits of integrating GPUs in such an environment. The
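The LoG-mask-and-zero-crossing step can be sketched directly: sample a Laplacian-of-Gaussian kernel, convolve it with the image, and mark sign changes in the response. The kernel size, sigma, and toy response map below are assumptions; the convolution itself is omitted for brevity.

```python
import math

def log_kernel(size=5, sigma=1.0):
    """Sample the (unnormalized) Laplacian-of-Gaussian function on a
    size x size grid; its center is negative, with a positive ring."""
    r = size // 2
    s2 = sigma * sigma
    return [[(x * x + y * y - 2 * s2) / (s2 * s2)
             * math.exp(-(x * x + y * y) / (2 * s2))
             for x in range(-r, r + 1)]
            for y in range(-r, r + 1)]

def zero_crossings(resp):
    """Mark pixels where the filter response changes sign against the
    right or bottom neighbour: candidate edge locations."""
    h, w = len(resp), len(resp[0])
    return [[1 if (x + 1 < w and resp[y][x] * resp[y][x + 1] < 0) or
                  (y + 1 < h and resp[y][x] * resp[y + 1][x] < 0)
             else 0 for x in range(w)] for y in range(h)]

k = log_kernel()
# A toy LoG response with a sign change in each row at column 1/2.
zc = zero_crossings([[-1.0, -0.5, 0.8],
                     [-0.9, -0.2, 0.7]])
```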

  16. TOPOGRAPHIC FEATURE EXTRACTION FOR BENGALI AND HINDI CHARACTER IMAGES

    Directory of Open Access Journals (Sweden)

    Soumen Bag

    2011-06-01

    Full Text Available Feature selection and extraction play an important role in different classification-based problems such as face recognition, signature verification, optical character recognition (OCR), etc. The performance of OCR highly depends on the proper selection and extraction of the feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By the topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed regions, convexity of strokes, and straight-line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi character images. Initial results demonstrate the efficacy of our approach.

  17. Topographic Feature Extraction for Bengali and Hindi Character Images

    Directory of Open Access Journals (Sweden)

    Soumen Bag

    2011-09-01

    Full Text Available Feature selection and extraction play an important role in different classification-based problems such as face recognition, signature verification, optical character recognition (OCR), etc. The performance of OCR highly depends on the proper selection and extraction of the feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By the topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed regions, convexity of strokes, and straight-line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi character images. Initial results demonstrate the efficacy of our approach.

  18. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    Science.gov (United States)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method for feature extraction from low-contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, Lee filtering is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background; finally, the common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction from low-contrast images.

  19. New learning subspace method for image feature extraction

    Institute of Scientific and Technical Information of China (English)

    CAO Jian-hai; LI Long; LU Chang-hou

    2006-01-01

    A new method, the Windows Minimum/Maximum Module Learning Subspace Algorithm (WMMLSA), for image feature extraction is presented. The WMMLSA is insensitive to the order of the training samples and can effectively regulate the radical vectors of an image feature subspace by selecting the study samples for the subspace iterative learning algorithm, so it can improve the robustness and generalization capacity of a pattern subspace and enhance the recognition rate of a classifier. At the same time, a pattern subspace is built by the PCA method. The classifier based on the WMMLSA is successfully applied to recognize pressed characters in gray-scale images. The results indicate that the correct recognition rate with the WMMLSA is higher than with the Average Learning Subspace Method, and that both training speed and classification speed are improved. The new method is more applicable and efficient.

  20. Feature extraction for target identification and image classification of OMIS hyperspectral image

    Institute of Scientific and Technical Information of China (English)

    DU Pei-jun; TAN Kun; SU Hong-jun

    2009-01-01

    In order to combine feature extraction operations with specific hyperspectral remote sensing information processing objectives, two aspects of feature extraction were explored. Based on clustering and decision tree algorithms, the spectral absorption index (SAI), continuum removal, and derivative spectral analysis were employed to discover characteristic spectral features of different targets, and decision trees for identifying a specific class and discriminating different classes were generated. By combining a support vector machine (SVM) classifier with different feature extraction strategies, including principal component analysis (PCA), minimum noise fraction (MNF), grouping PCA, and derivative spectral analysis, the performance of the feature extraction approaches in classification was evaluated. The results show that feature extraction by PCA and derivative spectral analysis is effective for OMIS (operational modular imaging spectrometer) image classification using SVM, and that SVM outperforms the traditional SAM and MLC classifiers for OMIS data.

  1. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies on moving object detection are typically chosen to improve detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels, and motion estimation is a measurement of motion pixel intensity, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
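Raw image moments, the quantity the MFEA builds on, can be sketched as follows; the centroid computation and toy image are illustrative, and the paper's actual two-layer algorithm is more involved.

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y).
    m_00 is total intensity; m_10/m_00 and m_01/m_00 give the centroid."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A bright 2x2 blob in the lower-right of a 4x4 frame; tracking the
# centroid across frames is one way moments feed motion estimation.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
cx, cy = centroid(img)
```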

  2. Texture features analysis for coastline extraction in remotely sensed images

    Science.gov (United States)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    Accurate knowledge of the shoreline position is of fundamental importance in several applications, such as cartography and ship positioning. Moreover, the coastline is a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which all data (the data representing the sea) are not significant. The processing for the digital elevation model could then be refined by considering only the in-land data.

  3. Topographic Feature Extraction for Bengali and Hindi Character Images

    CERN Document Server

    Bag, Soumen; 10.5121/sipij.2011.2215

    2011-01-01

    Feature selection and extraction plays an important role in different classification based problems such as face recognition, signature verification, optical character recognition (OCR) etc. The performance of OCR highly depends on the proper selection and extraction of feature set. In this paper, we present novel features based on the topography of a character as visible from different viewing directions on a 2D plane. By topography of a character we mean the structural features of the strokes and their spatial relations. In this work we develop topographic features of strokes visible with respect to views from different directions (e.g. North, South, East, and West). We consider three types of topographic features: closed region, convexity of strokes, and straight line strokes. These features are represented as a shape-based graph which acts as an invariant feature set for discriminating very similar type characters efficiently. We have tested the proposed method on printed and handwritten Bengali and Hindi...

  4. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being requested in several applications, have directed a number of research efforts toward automatic video analysis techniques. The processes for automatic video analysis are based on the recognition of short sequences of contiguous frames that describe the same scene (shots), and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques, to identify the low-level visual features of the frames that best represent the shot content. To evaluate the features' performance, key frames automatically extracted using these features are compared to human operator video annotations.
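A low-level-feature criterion for key-frame extraction of the kind discussed can be sketched with gray-level histogram differences: a new key frame is declared whenever the current frame's histogram drifts far enough from the last key frame's. The distance metric, threshold, and toy frames are assumptions, not the paper's settings.

```python
def hist(frame, bins=4):
    """Gray-level histogram of a 2-D frame with `bins` intensity ranges."""
    h = [0] * bins
    for row in frame:
        for v in row:
            h[min(v * bins // 256, bins - 1)] += 1
    return h

def shot_key_frames(frames, threshold=4):
    """Pick a new key frame whenever the L1 histogram distance to the
    last key frame exceeds `threshold` (an assumed criterion)."""
    keys = [0]
    for i in range(1, len(frames)):
        ref, cur = hist(frames[keys[-1]]), hist(frames[i])
        if sum(abs(a - b) for a, b in zip(ref, cur)) > threshold:
            keys.append(i)
    return keys

dark  = [[10, 10], [10, 10]]
light = [[220, 220], [220, 220]]
keys = shot_key_frames([dark, dark, light, light])   # content change at frame 2
```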

  5. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MA Huimin; WANG Yan

    2003-01-01

    Very high speed image processing is needed in some applications, especially for weapons. In this paper, a high speed image feature extraction system with a parallel structure was implemented in a complex programmable logic device (CPLD); it can perform image feature extraction in several microseconds, almost with no delay. The system design is presented through an application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image. Feature extraction is accordingly performed on both kinds of features. Edges and areas are the two most important features of an image. An angle often exists at the connection of different parts of the target's image, indicating that one area ends and another begins. These three key features can form the whole presentation of an image, so the parallel feature extraction system includes three processing modules: edge extraction, angle extraction, and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one processor route, every route has the same circuit form, and all work together, controlled by a common clock, to realize feature extraction. The extraction system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.

  6. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization, with selection of a global, local, or mean threshold. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique is combined with feature extraction using the discrete sine transform (DST) for better classification results through multitechnique fusion. The novel methodology was compared to traditional feature extraction techniques for content-based image classification. Three benchmark datasets, namely the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation. The performance measures clearly reveal the superiority of the proposed fusion technique, with ordered mean values and the discrete sine transform, over popular single-view feature extraction methodologies for classification.
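The discrete sine transform used in the fused features can be sketched as a direct DST-II over a 1-D signal (e.g., a row of ordered mean values). The DST variant and the constant test signal are assumptions; a real feature extractor would apply this to vectors derived from the image.

```python
import math

def dst_features(signal):
    """Direct DST-II: X_k = sum_n x_n * sin(pi * (n + 1/2) * (k + 1) / N).
    The coefficients serve as a frequency-domain feature vector."""
    n_len = len(signal)
    return [sum(x * math.sin(math.pi * (n + 0.5) * (k + 1) / n_len)
                for n, x in enumerate(signal))
            for k in range(n_len)]

# A constant signal concentrates its energy in the first coefficient;
# even-indexed higher coefficients vanish by symmetry.
feat = dst_features([1.0, 1.0, 1.0, 1.0])
```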

  7. The fuzzy Hough Transform-feature extraction in medical images

    Energy Technology Data Exchange (ETDEWEB)

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B. (Univ. of Iowa, Iowa City, IA (United States)); McPherson, D.D.; Gotteiner, N.L. (Northwestern Univ., Chicago, IL (United States). Dept. of Internal Medicine)

    1994-06-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location, and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.
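The fuzzy voting idea can be illustrated with a toy circular Hough transform in which each edge point's votes are weighted by a triangular fuzzy membership over nearby radii. This is a minimal sketch of the general notion, not the authors' algorithm; the membership function and angular sampling are my own choices.

```python
import numpy as np

def fuzzy_hough_circle(edge_points, shape, radius, spread=2):
    """Toy fuzzified circular Hough transform: each edge point casts votes
    for candidate centres, weighted by a triangular fuzzy membership over
    radii within `spread` of the nominal `radius`."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_points:
        for r in range(radius - spread, radius + spread + 1):
            w = 1.0 - abs(r - radius) / (spread + 1)   # fuzzy membership
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), w)
    return acc

# Synthetic test: points on a circle of radius 10 centred at (20, 20).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([20 + 10 * np.sin(t),
                       20 + 10 * np.cos(t)]).round().astype(int)
acc = fuzzy_hough_circle(pts, (40, 40), radius=10)
center = np.unravel_index(np.argmax(acc), acc.shape)
print(center)   # peak lands near the true centre (20, 20)
```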

  8. A New Method of Semantic Feature Extraction for Medical Images Data

    Institute of Scientific and Technical Information of China (English)

    XIE Conghua; SONG Yuqing; CHANG Jinyi

    2006-01-01

    In order to overcome the disadvantages of color-, shape- and texture-based feature definitions for medical images, this paper defines a new kind of semantic feature and its extraction algorithm. We first use a kernel density estimation statistical model to describe the complicated medical image data; second, we define some typical representative pixels of the images as features; and finally, we use a hill-climbing strategy from Artificial Intelligence to extract those semantic features. Results from a content-based medical image retrieval system show that our semantic features have better distinguishing ability than color-, shape- and texture-based features and can markedly improve the system's recall and precision ratios.

  9. The fuzzy Hough transform-feature extraction in medical images.

    Science.gov (United States)

    Philip, K P; Dove, E L; McPherson, D D; Gotteiner, N L; Stanford, W; Chandran, K B

    1994-01-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location, and the derived region of interest, the authors find the final (improved) estimate of the true borders with other (subsequently used) image processing techniques. They present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.

  10. Iris image enhancement for feature recognition and extraction

    CSIR Research Space (South Africa)

    Mabuza, GP

    2012-10-01

    Full Text Available Gonzalez, R.C. and Woods, R.E. 2002. Digital Image Processing, 2nd Edition, Instructor's manual. Englewood Cliffs, Prentice Hall, pp 17-36. Proença, H. and Alexandre, L.A. 2007. Toward Noncooperative Iris Recognition: A classification approach using... for performing such tasks and yielding better accuracy (Gonzalez & Woods, 2002). METHODOLOGY The block diagram in Figure 2 demonstrates the processes followed to achieve the results. Figure 2: Methodology flow chart Iris image enhancement for feature...

  11. STATISTICAL PROBABILITY BASED ALGORITHM FOR EXTRACTING FEATURE POINTS IN 2-DIMENSIONAL IMAGE

    Institute of Scientific and Technical Information of China (English)

    Guan Yepeng; Gu Weikang; Ye Xiuqing; Liu Jilin

    2004-01-01

    An algorithm for automatically extracting feature points is developed, after the area of feature points in a 2-dimensional (2D) image is located using probability theory, correlation methods and an abnormality criterion. In our approach, feature points in a 2D image can be extracted statistically, simply by calculating the standard deviation of gray values within sampled pixel areas. While extracting feature points, the need to set a threshold by trial and error from a priori information about the processed image is avoided. The proposed algorithm is shown to be valid and reliable by extracting feature points from actual natural images with abundant and weak texture, including multiple objects against complex backgrounds. It can meet the demand for automatic extraction of 2D image feature points in machine vision systems.
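A minimal sketch of the core idea, marking feature points where the local grey-level standard deviation is high relative to the image-wide average. The window size, threshold factor, and all names are my own assumptions rather than the paper's parameters.

```python
import numpy as np

def std_feature_points(img, win=5, k=2.0):
    """Mark a pixel as a feature point when the grey-level standard deviation
    of its local win x win window exceeds k times the mean of those
    deviations over the valid image area (illustrative criterion)."""
    h, w = img.shape
    r = win // 2
    std_map = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            std_map[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].std()
    return np.argwhere(std_map > k * std_map[r:h - r, r:w - r].mean())

img = np.zeros((30, 30))
img[10:20, 10:20] = 255.0    # a bright square: its border varies locally
pts = std_feature_points(img)
print(len(pts) > 0)          # points cluster along the square's border
```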

  12. Fingerprint Feature Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Mehala. G

    2014-03-01

    Full Text Available The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also extract true minutiae.

  13. Medical Image Fusion Based on Feature Extraction and Sparse Representation.

    Science.gov (United States)

    Fei, Yin; Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure or time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed: a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), which make the results preserve more energy and edge information. The SM captures local structure features with the Laplacian of a Gaussian (LoG), and the EM captures energy and energy-distribution features detected by the mean square deviation. The decision map is added to the standard sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
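The SM and EM maps described above can be sketched as follows, assuming LoG magnitude for structure and windowed variance for energy; combining them by multiplication is my own stand-in for the paper's SEM, not its actual rule.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def structure_map(img, sigma=1.0):
    """SM sketch: local structure via the Laplacian of Gaussian (LoG)."""
    return np.abs(gaussian_laplace(img, sigma))

def energy_map(img, win=7):
    """EM sketch: local mean square deviation (variance) in a win x win window,
    computed as E[x^2] - E[x]^2 with box filters."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return mean_sq - mean * mean

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
sem = structure_map(img) * energy_map(img)   # a simple combined SEM proxy
print(sem.shape)
```

In a fusion pipeline, such maps would be compared between the two source images to decide, per region, which image contributes the fused coefficients.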

  14. FEATURE EXTRACTION OF RETINAL IMAGE FOR DIAGNOSIS OF ABNORMAL EYES

    Directory of Open Access Journals (Sweden)

    S. Praveenkumar

    2011-05-01

    Full Text Available Currently, medical image processing draws the intense interest of scientists and physicians as an aid to clinical diagnosis. The retinal fundus image is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. If these diseases are detected and treated early, many visual losses can be prevented. This paper presents methods to detect the main features of fundus images, such as the optic disk, fovea, exudates and blood vessels. To determine the optic disk and its centre we find the brightest part of the fundus. The candidate region of the fovea is defined as a circular area, and the fovea is detected using its spatial relationship with the optic disk. Exudates are found using their high grey-level variation, and their contours are determined by means of morphological reconstruction techniques. The blood vessels are highlighted using the bottom-hat transform and morphological dilation after edge detection. All the enhanced features are then combined in the fundus image for the detection of abnormalities in the eye.
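Two of the steps above, locating the brightest region and highlighting dark vessels with a bottom-hat transform, can be sketched on a synthetic fundus-like image. The smoothing window and structuring-element size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import grey_closing, maximum_position, uniform_filter

def optic_disk_centre(fundus):
    """Sketch: take the brightest pixel of a box-smoothed image as the disk centre."""
    return maximum_position(uniform_filter(fundus, 5))

def bottom_hat(img, size=7):
    """Bottom-hat transform: morphological closing minus the original image,
    which highlights thin dark structures such as blood vessels."""
    return grey_closing(img, size=size) - img

img = np.full((40, 40), 100.0)
img[20, :] = 10.0        # a thin dark "vessel"
img[5:9, 5:9] = 200.0    # a bright "optic disk" patch
vessels = bottom_hat(img)
print(optic_disk_centre(img), vessels[20].max() > 0)
```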

  15. Automatic extraction of disease-specific features from Doppler images

    Science.gov (United States)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows strong agreement, as well as the identification of new cases missed by the echocardiographers.

  16. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As a hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  17. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    Science.gov (United States)

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOBs), with the maximum noise fraction (MNF) method adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs, and the MNF is then used to extract characteristic features from each SOB. The extracted features are combined into the feature vector for classification, so strong band correlations are avoided and spectral redundancies are reduced. The LS-SVM classifier replaces the inequality constraints of the standard SVM with equality constraints, which reduces the computational cost and improves learning performance. The proposed method optimizes spectral information through feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.

  18. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    Science.gov (United States)

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  19. A Novel Feature Extraction Scheme for Medical X-Ray Images

    OpenAIRE

    Prachi.G.Bhende; Dr.A.N.Cheeran

    2016-01-01

    X-ray images are gray scale images with almost the same textural characteristic. Conventional texture or color features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a novel combination of methods like GLCM, LBP and HOG for extracting distinctive invariant features from Xray images belonging to IRMA (Image Retrieval in Medical applications) database that can be used to perform reliable matching between different views of an obje...

  20. Feature extraction for the analysis of colon status from the endoscopic images

    Directory of Open Access Journals (Sweden)

    Krishnan Shankar M

    2003-04-01

    Full Text Available Abstract Background Extracting features from colonoscopic images is essential for obtaining features that characterize the properties of the colon. The features are employed in the computer-assisted diagnosis of colonoscopic images to assist the physician in assessing the colon status. Methods Endoscopic images contain rich texture and color information. Novel schemes are developed to extract new texture features from the texture spectra in the chromatic and achromatic domains, and color features for a selected region of interest from each color component histogram of the colonoscopic images. These features are reduced in size using Principal Component Analysis (PCA) and are evaluated using a Backpropagation Neural Network (BPNN). Results Features extracted from endoscopic images were tested to classify the colon status as either normal or abnormal. The classification results obtained show the features' capability for classifying the colon's status. The average classification accuracy using a hybrid of the texture and color features with PCA (τ = 1%) is 97.72%, higher than the average classification accuracy using only texture (96.96%, τ = 1%) or color (90.52%, τ = 1%) features. Conclusion In conclusion, novel methods for extracting new texture- and color-based features from colonoscopic images to classify the colon status have been proposed, together with a new approach using PCA in conjunction with a BPNN for evaluating the features. The preliminary test results support the feasibility of the proposed method.
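The PCA-plus-BPNN evaluation stage can be sketched with scikit-learn, using a multilayer perceptron as the backpropagation network. The synthetic feature vectors below stand in for the texture and colour features described above; all dimensions and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in data: rows are per-image feature vectors (texture + colour),
# labels mark normal (0) vs abnormal (1) colon status.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 40)),
               rng.normal(2, 1, (60, 40))])
y = np.array([0] * 60 + [1] * 60)

# PCA reduces the feature dimension; the MLP is trained by backpropagation.
clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
acc = clf.score(X, y)
print(acc)
```

A real evaluation would of course use held-out test images rather than training accuracy.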

  1. Edge-Based Feature Extraction Method and Its Application to Image Retrieval

    Directory of Open Access Journals (Sweden)

    G. Ohashi

    2003-10-01

    Full Text Available We propose a novel feature extraction method for content-based image retrieval using graphical rough sketches. The proposed method extracts features based on the shape and texture of objects. This edge-based feature extraction method works by representing the relative positional relationships between edge pixels, and has the advantage of being shift-, scale-, and rotation-invariant. In order to verify its effectiveness, we applied the proposed method to 1,650 images obtained from the Hamamatsu-city Museum of Musical Instruments and 5,500 images obtained from the Corel Photo Gallery. The results verified that the proposed method is an effective tool for achieving accurate retrieval.

  2. Image mining and Automatic Feature extraction from Remotely Sensed Image (RSI using Cubical Distance Methods

    Directory of Open Access Journals (Sweden)

    S.Sasikala

    2013-04-01

    Full Text Available Information processing and decision support systems using image mining techniques are advancing rapidly with the huge availability of remote sensing images (RSI). An RSI describes the inherent properties of objects by recording their natural reflectance in the electromagnetic spectral (EMS) region. Information on such objects can be gathered from their color properties or their spectral values in various EMS ranges in the form of pixels. The present paper explains a method for such information extraction using the cubical distance method, along with the resulting output. This method is among the simpler ones in its approach, and groups pixels on the basis of equal distance from a specified point in the image, or from a selected pixel having definite attribute values (DN) in the different spectral layers of the RSI. The color distance and the pixel occurrence distance play a vital role in determining similar objects as clusters, and aid in extracting features in the RSI domain.
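A minimal sketch of the grouping idea, collecting pixels whose maximum per-band difference from a seed pixel's spectral values is below a tolerance. Interpreting "cubical distance" as the Chebyshev (L∞) metric over spectral layers is my own assumption.

```python
import numpy as np

def cubical_distance_cluster(img, seed, tol=30):
    """Group pixels by Chebyshev (max-channel) distance from the spectral
    values of a chosen seed pixel (illustrative reading of 'cubical distance')."""
    seed_val = img[seed]                        # shape (bands,)
    dist = np.abs(img - seed_val).max(axis=-1)  # L_inf over spectral layers
    return dist <= tol

rng = np.random.default_rng(2)
img = rng.integers(0, 50, size=(20, 20, 3)).astype(float)
img[5:10, 5:10] = [200.0, 180.0, 160.0]         # a spectrally distinct "object"
mask = cubical_distance_cluster(img, seed=(7, 7), tol=30)
print(mask[5:10, 5:10].all(), int(mask.sum()))  # True 25
```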

  3. Fingerprint Feature Extraction Algorithm

    OpenAIRE

    Mehala. G

    2014-01-01

    The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and also extractin...

  4. An Adequate Approach to Image Retrieval Based on Local Level Feature Extraction

    Directory of Open Access Journals (Sweden)

    Sumaira Muhammad Hayat Khan

    2010-10-01

    Full Text Available Image retrieval based on text annotation has become obsolete and is no longer of interest to scientists because of its high time complexity and low precision of results. Meanwhile, the increase in the amount of digital images has generated a pressing need for an accurate and efficient retrieval system. This paper proposes a content based image retrieval technique at a local level incorporating all the rudimentary features. The image initially undergoes a segmentation process, and each segment is then directed to the feature extraction process. The proposed technique is based on the image's content, which primarily includes texture, shape and color. Besides these three basic features, FD (Fourier Descriptors) and edge histogram descriptors are also calculated to enhance the feature extraction process by capturing information at the boundary. The performance of the proposed method is found to be quite adequate when compared with the results from one of the best local level CBIR (Content Based Image Retrieval) techniques.
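The Fourier Descriptors (FD) mentioned above can be sketched as follows. Normalising by the first harmonic's magnitude for scale invariance, after zeroing the DC term for translation invariance, is one common convention assumed here rather than taken from the paper.

```python
import numpy as np

def fourier_descriptors(contour, n_desc=8):
    """Treat boundary points as complex numbers, take the FFT, and normalise
    so that the resulting descriptors are translation- and scale-invariant."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0                          # drop DC term -> translation invariance
    mag = np.abs(F)
    if mag[1] == 0:
        return mag[1:n_desc + 1]
    return mag[1:n_desc + 1] / mag[1]  # divide by first harmonic -> scale invariance

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
shifted_scaled = 3.0 * circle + [5.0, 7.0]
d1 = fourier_descriptors(circle)
d2 = fourier_descriptors(shifted_scaled)
print(np.allclose(d1, d2))   # True: invariant to shift and scale
```

Using only descriptor magnitudes also discards the starting-point phase, which is why such signatures tolerate rotation of the sampling origin.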

  5. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly, and various techniques are being developed to retrieve and search the digital information or data contained in images. The traditional text based image retrieval system is insufficient: it is time consuming, it requires manual image annotation, and the annotations differ from person to person. An alternative is the Content Based Image Retrieval (CBIR) system, which retrieves and searches for images using their contents rather than text or keywords. A great deal of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature, as it reflects human perception; moreover, shape is quite simple for the user to employ when defining an object in an image, compared with other features such as color and texture. Above all, no descriptor applied alone will give fruitful results; by combining a descriptor with an improved classifier, one can use the positive features of both. Therefore, an attempt will be made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state of the art techniques.

  6. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicking is a crucial step in the three-dimensional reconstruction of composite materials, aligning the serial images. A novel method is adopted to mosaic two SiC/Al microscopic images taken at a magnification of 1000x. The two images are denoised with a Gaussian model, and feature points are then extracted using the Harris corner detector. The feature points are filtered through a Canny edge detector. A 40x40 feature template is chosen by sowing a seed in an overlapped area of the reference image, and the homologous region in the floating image is acquired automatically by correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into universal coordinates and merged into the final mosaic image.
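The Harris corner detection step can be sketched as follows; the Gaussian scale and the sensitivity constant k are conventional illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner measure R = det(M) - k * trace(M)^2, where M is the
    Gaussian-smoothed structure tensor of the image gradients."""
    Iy, Ix = np.gradient(img)
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

img = np.zeros((30, 30))
img[10:20, 10:20] = 1.0               # a square has four strong corners
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)
print(corner)   # the strongest response lies at one of the square's corners
```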

  7. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Science.gov (United States)

    Xi, Wenfei; Shi, Zhengtao; Li, Dongsheng

    2017-07-01

    Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. The commonly used point feature extraction operators are the SIFT operator, Forstner operator, Harris operator and Moravec operator, among others. With its high spatial resolution, a UAV image differs from a traditional aerial image. Based on these characteristics of unmanned aerial vehicle (UAV) imagery, this paper uses the operators referred to above to extract feature points from building, grassland, shrubbery, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed in terms of speed and accuracy. Finally, suggestions on how to apply the different algorithms in diverse environments are proposed.

  8. A Fast Feature Extraction Method Based on Integer Wavelet Transform for Hyperspectral Images

    Institute of Scientific and Technical Information of China (English)

    GU Yanfeng; ZHANG Ye; YU Shanshan

    2004-01-01

    Hyperspectral remote sensing provides high-resolution spectral data and the potential for remote discrimination between subtle differences in ground cover. However, the high-dimensional data space generated by hyperspectral sensors creates a new challenge for conventional spectral data analysis techniques: eliminating redundancy while preserving the spectral information useful for applications. In this paper, a fast feature extraction (FFE) method based on the integer wavelet transform is proposed to extract useful features and reduce the dimensionality of hyperspectral images. The FFE method can be applied directly to the spectral vector of each pixel in a hyperspectral image, and has two main merits: high computational efficiency and a good ability to extract spectral features. To verify the effectiveness and performance of the proposed method, classification experiments were performed on two groups of AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data. In addition, three existing methods for feature extraction of hyperspectral images, i.e. PCA, SPCT and the wavelet transform, were applied to the same data for comparison. The experimental investigation shows that the efficiency of the FFE method for feature extraction surpasses that of the other three methods.
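An integer wavelet transform of the kind the abstract relies on can be sketched with one level of the lifting-scheme Haar transform applied to a spectral vector. This is a generic sketch of integer lifting, not the paper's exact transform; the toy spectral vector is invented.

```python
import numpy as np

def integer_haar(x):
    """One level of the integer (lifting) Haar transform: detail d = odd - even,
    approximation s = even + floor(d/2). All arithmetic stays in integers,
    which is what makes the transform fast and exactly invertible."""
    even, odd = x[0::2], x[1::2]
    d = odd - even
    s = even + d // 2
    return s, d

def inverse_integer_haar(s, d):
    """Exact inverse of the lifting steps above."""
    even = s - d // 2
    odd = even + d
    x = np.empty(len(s) + len(d), dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

spec = np.array([10, 12, 30, 31, 7, 3, 90, 88])   # a toy spectral vector
s, d = integer_haar(spec)
print(np.array_equal(inverse_integer_haar(s, d), spec))   # True: perfect reconstruction
```

For feature extraction, the approximation band s (and recursively its approximations) would serve as the reduced-dimension spectral feature.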

  9. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Still images and videos are the most commonly used formats: images are compact in size but do not contain motion information, while videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos yet still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on ice is hampered by diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and made new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  10. Image Analysis of Soil Micromorphology: Feature Extraction, Segmentation, and Quality Inference

    Directory of Open Access Journals (Sweden)

    Petros Maragos

    2004-06-01

    Full Text Available We present an automated system that we have developed for estimating the bioecological quality of soils using various image analysis methodologies. Its goal is to analyze soil-section images, extract features related to their micromorphology, and relate the visual features to various degrees of soil fertility inferred from biochemical characteristics of the soil. The image methodologies used range from low-level image processing tasks, such as nonlinear enhancement, multiscale analysis, geometric feature detection, and size distributions, to object-oriented analysis, such as segmentation, region texture, and shape analysis.

  11. [Research on non-rigid medical image registration algorithm based on SIFT feature extraction].

    Science.gov (United States)

    Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen

    2010-08-01

    For the non-rigid registration of medical images, this paper gives a practical feature point matching algorithm: an image registration algorithm based on the Scale Invariant Feature Transform (SIFT). The algorithm exploits the invariance of image features to translation, rotation and affine transformation in scale space to extract the image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between the images, improving the accuracy of image registration. On this basis, an affine transform is chosen to carry out the non-rigid registration, and a normalized mutual information measure and PSO optimization algorithm are chosen to optimize the registration process. The experimental results show that the method achieves better registration results than the method based on mutual information alone.
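The bidirectional matching step can be sketched as mutual nearest-neighbour filtering of descriptor matches: a pair is kept only when each descriptor is the other's closest match. The descriptors below are synthetic stand-ins for SIFT vectors.

```python
import numpy as np

def bidirectional_match(desc_a, desc_b):
    """Keep a pair (i, j) only if j is i's nearest neighbour in B AND
    i is j's nearest neighbour in A (mutual consistency check)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)          # best match in B for each A descriptor
    b2a = d.argmin(axis=0)          # best match in A for each B descriptor
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 8))
B = A[::-1] + rng.normal(scale=0.01, size=(5, 8))   # same points, reversed order
matches = bidirectional_match(A, B)
print(matches)   # each index i pairs with index 4 - i
```

Compared with one-way nearest-neighbour matching, the mutual check discards many of the ambiguous correspondences that degrade registration accuracy.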

  12. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads by spectral, texture and linear features have certain limitations. In addition, many methods need human intervention to obtain road seeds (semi-automatic extraction), which makes them heavily human-dependent and inefficient. A road extraction method using image segmentation based on the principle of local gray consistency, integrated with shape features, is proposed in this paper. First, the image is segmented, and both linear and curved roads are obtained by using several object shape features, rectifying methods that extract only linear roads. Second, road extraction is carried out based on region growing: road seeds are automatically selected and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images including roads with good gray uniformity as well as poorly illuminated road surfaces were chosen, and the results show that the method of this study is promising.
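The region-growing step based on local gray consistency can be sketched as a flood fill that accepts neighbours whose grey value lies within a tolerance of the seed's value; 4-connectivity and the tolerance are my own illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Flood outward from a seed pixel, accepting 4-neighbours whose grey
    value is within tol of the seed's value (local gray consistency)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = img[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.full((20, 20), 200.0)   # bright background
img[:, 8:11] = 60.0              # a dark vertical "road", 3 pixels wide
road = region_grow(img, seed=(0, 9), tol=10.0)
print(int(road.sum()))           # 60: the full road, nothing else
```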

  13. Global image feature extraction using slope pattern spectra

    CSIR Research Space (South Africa)

    Toudjeu, IT

    2008-06-01

    Full Text Available of coffee beans. Granulometries were also used to estimate the dominant width of the white patterns in the X-ray images of welds [7]. Due to the computational load associated with the calculation of granulometries, Vincent [6], building on the work...

  14. Spatial and Spectral Nonparametric Linear Feature Extraction Method for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Jinn-Min Yang

    2016-11-01

    Full Text Available Feature extraction (FE), or dimensionality reduction (DR), plays an important role in pattern recognition. Feature extraction aims to reduce the dimensionality of a high-dimensional dataset to enhance classification accuracy and speed, particularly when the training sample size is small, i.e., the small sample size (SSS) problem. Remotely sensed hyperspectral images (HSIs) often have hundreds of measured features (bands), which potentially provide more accurate and detailed information for classification but generally require more samples for parameter estimation to achieve satisfactory results. Collecting ground truth for a remotely sensed hyperspectral scene can be difficult and expensive, so FE techniques have become an important part of hyperspectral image classification. While many feature extraction methods are based only on the spectral (band) information of the training samples, methods that integrate both spatial and spectral information have shown more effective results in recent years. Spatial context has been proven useful for improving HSI data representation and increasing classification accuracy. In this paper, we propose a spatial and spectral nonparametric linear feature extraction method for hyperspectral image classification. The spatial and spectral information extracted for each training sample is used to design the within-class and between-class scatter matrices that constitute the feature extraction model. Experimental results on one benchmark hyperspectral image demonstrate that the proposed method obtains more stable and satisfactory results than some existing spectral-based feature extraction methods.
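    The scatter-matrix construction that this record builds on can be illustrated with a plain LDA-style sketch (NumPy; the paper's spatial-spectral weighting is omitted, and the synthetic data are illustrative):

    ```python
    import numpy as np

    def scatter_matrix_fe(X, y, n_features):
        """Linear feature extraction from within-class (Sw) and
        between-class (Sb) scatter matrices, LDA-style."""
        classes = np.unique(y)
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))
        Sb = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean_all)[:, None]
            Sb += len(Xc) * diff @ diff.T
        # Directions maximizing between- vs. within-class scatter.
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(eigvals.real)[::-1]
        W = eigvecs.real[:, order[:n_features]]
        return X @ W

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
    y = np.array([0] * 20 + [1] * 20)
    Z = scatter_matrix_fe(X, y, 1)
    print(Z.shape)  # (40, 1)
    ```

    The paper's method replaces these plain class means with weighted, spatially informed versions, but the projection step is of the same form.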

  15. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  16. Image quality assessment method based on nonlinear feature extraction in kernel space

    Institute of Scientific and Technical Information of China (English)

    Yong DING‡; Nan LI; Yang ZHAO; Kai HUANG

    2016-01-01

    To match human perception, extracting perceptual features effectively plays an important role in image quality assessment. In contrast to most existing methods that use linear transformations or models to represent images, we employ a complex mathematical expression of high dimensionality to reveal the statistical characteristics of the images. Furthermore, by introducing kernel methods to transform the linear problem into a nonlinear one, a full-reference image quality assessment method is proposed based on high-dimensional nonlinear feature extraction. Experiments on the LIVE, TID2008, and CSIQ databases demonstrate that nonlinear features offer competitive performance for image inherent quality representation and the proposed method achieves a promising performance that is consistent with human subjective evaluation.
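    As a generic illustration of moving feature extraction into kernel space, here is a minimal RBF kernel PCA sketch in NumPy (not the paper's exact model; the kernel and its width are illustrative assumptions):

    ```python
    import numpy as np

    def rbf_kernel_pca(X, n_components, gamma=0.1):
        """Nonlinear feature extraction via RBF kernel PCA."""
        sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
        K = np.exp(-gamma * sq)
        n = len(X)
        one = np.ones((n, n)) / n
        # Centre the kernel matrix in feature space.
        Kc = K - one @ K - K @ one + one @ K @ one
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]
        # Projections of the samples onto the leading kernel components.
        return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

    X = np.random.default_rng(2).normal(size=(30, 4))
    Z = rbf_kernel_pca(X, 2)
    print(Z.shape)  # (30, 2)
    ```

    The kernel trick lets a linear eigendecomposition in the implicit feature space capture nonlinear structure in the original image statistics.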

  17. Satellite Imagery Cadastral Features Extractions using Image Processing Algorithms: A Viable Option for Cadastral Science

    Directory of Open Access Journals (Sweden)

    Usman Babawuro

    2012-07-01

    Full Text Available Satellite images are used for feature extraction among other functions: for example, to extract linear features such as roads. Linear feature extraction is an important operation in computer vision, which has varied applications in photogrammetric, hydrographic, cartographic, and remote sensing tasks. Extracting the linear features or boundaries that define the extents of land and land-cover features is equally important in cadastral surveying, the cornerstone of any cadastral system. A two-dimensional cadastral plan is a model that represents both the cadastral and geometrical information of a two-dimensional labeled image. This paper aims at using high-resolution satellite imagery to extract representations of cadastral boundaries with image processing algorithms, thereby minimizing human intervention. The satellite imagery is first rectified, establishing the correct orientation and spatial location for further analysis. The relevant cadastral features are then extracted using computer vision and image processing algorithms. We evaluate the potential of high-resolution satellite imagery for achieving the cadastral goals of boundary detection and farmland extraction. The method proves effective, as it minimizes the human errors associated with conventional cadastral surveying and provides another perspective on achieving the cadastral goals emphasized by the UN cadastral vision. Finally, as cadastral science looks to the future, this research analyzes and offers insight into the characteristics and potential role of computer vision algorithms applied to high-resolution satellite imagery for a better digital cadastre supporting socio-economic development.

  18. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS), based on the observation that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. The texture image information (TII) derived from the spectrogram image is then extracted using Laws' masks to characterize emotional state. To evaluate the effectiveness of the proposed emotion recognition across languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, together with one self-recorded database (KHUSC-EmoDB), for cross-corpus evaluation. Results for the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, provides significant classification performance for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions beyond what the pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
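    The Laws' masks mentioned in this record can be sketched as follows (NumPy only; the particular kernel pair and window size are illustrative assumptions, not the paper's configuration):

    ```python
    import numpy as np

    # Classic 1-D Laws kernels: Level, Edge, Spot.
    L5 = np.array([1, 4, 6, 4, 1], float)
    E5 = np.array([-1, -2, 0, 2, 1], float)
    S5 = np.array([-1, 0, 2, 0, -1], float)

    def laws_energy(img, k1, k2, win=7):
        """Texture energy from one 2-D Laws mask (outer product of
        two 1-D kernels), averaged over a local window."""
        mask = np.outer(k1, k2)
        # Filter response at every 5x5 neighbourhood.
        patches = np.lib.stride_tricks.sliding_window_view(img, (5, 5))
        filtered = np.abs((patches * mask).sum(axis=(-1, -2)))
        # Local energy: mean absolute response over a win x win window.
        wins = np.lib.stride_tricks.sliding_window_view(filtered, (win, win))
        return wins.mean(axis=(-1, -2))

    img = np.random.default_rng(3).random((32, 32))
    e = laws_energy(img, L5, E5)
    print(e.shape)  # (22, 22)
    ```

    Applied to a spectrogram image, maps like this one (for several kernel pairs) form the texture image information used as the emotion feature.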

  19. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    Science.gov (United States)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors can detect a wide range of objects on the earth, from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the best-suited model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing recent related research. The method used is based on the PRISMA statement. After deriving the important points from trusted sources, pixel-based and texture-based feature extraction emerge as the most promising techniques for further analysis in the recent development of feature extraction and classification.

  20. Application of Texture Characteristics for Urban Feature Extraction from Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    D.Shanmukha Rao

    2014-12-01

    Full Text Available The quest for foolproof methods of extracting various urban features from high-resolution satellite imagery with minimal human intervention has led to the development of texture-based algorithms. Given that the textural properties of images provide valuable information for discrimination, it is appropriate to employ texture-based algorithms for feature extraction. The Gray Level Co-occurrence Matrix (GLCM) method is a highly efficient technique for extracting second-order statistical texture features. Various urban features can be distinguished by a set of features, e.g., energy, entropy, and homogeneity, that characterize different aspects of the underlying texture. As a preliminary step, a notable number of regions of interest for the urban feature and for contrast locations are identified visually. After calculating the gray level co-occurrence matrices of these selected regions, the aforementioned texture features are computed. These features can form a high-dimensional feature vector for content-based retrieval. Insignificant features are eliminated to reduce the dimensionality of the feature vector by Principal Components Analysis (PCA); the selection of discriminating features is also aided by the Jeffreys-Matusita (JM) distance, which serves as a measure of class separability. Feature identification is then carried out by computing the chosen feature vectors for every pixel of the entire image and comparing them with their corresponding mean values. This helps identify and classify the pixels corresponding to the urban feature being extracted. To reduce commission errors, several index values, the Soil Adjusted Vegetation Index (SAVI), Normalized Difference Vegetation Index (NDVI), and Normalized Difference Water Index (NDWI), are assessed for each pixel. The extracted output is then median filtered to isolate the feature of interest after removing salt-and-pepper noise.
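    The GLCM statistics named in this record (contrast, entropy, mean) can be computed from scratch as a small sketch (NumPy; quantization to 8 gray levels and a horizontal pixel offset are illustrative choices):

    ```python
    import numpy as np

    def glcm_features(img, levels=8, dx=1, dy=0):
        """Grey-level co-occurrence matrix for one offset, plus the
        contrast, entropy, and mean texture statistics."""
        q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
        glcm = np.zeros((levels, levels))
        h, w = q.shape
        for i in range(h - dy):
            for j in range(w - dx):
                glcm[q[i, j], q[i + dy, j + dx]] += 1
        p = glcm / glcm.sum()                      # normalize to probabilities
        ii, jj = np.indices(p.shape)
        contrast = ((ii - jj) ** 2 * p).sum()
        entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
        mean = (ii * p).sum()
        return contrast, entropy, mean

    img = np.random.default_rng(4).integers(0, 256, (64, 64))
    c, e, m = glcm_features(img)
    print(c >= 0, e > 0)  # True True
    ```

    In practice these statistics are computed over several offsets and directions and concatenated into the feature vector described above.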

  1. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    Science.gov (United States)

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2016-09-12

    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently explore the complementary properties among different features, which should help boost feature discriminability. Furthermore, it is difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.

  2. Comparative Analysis of Feature Extraction Methods for the Classification of Prostate Cancer from TRUS Medical Images

    Directory of Open Access Journals (Sweden)

    Manavalan Radhakrishnan

    2012-01-01

    Full Text Available Diagnosing prostate cancer is a challenging task for urologists, radiologists, and oncologists. Ultrasound imaging is one of the promising techniques for early detection of prostate cancer. The region of interest (ROI) is identified by different methods after preprocessing; in this paper, DBSCAN clustering with morphological operators is used to extract the prostate region. The evaluation of texture features is important for several image processing applications. The performance of features extracted by various texture methods, such as the histogram, Gray Level Co-occurrence Matrix (GLCM), and Gray Level Run-Length Matrix (GLRLM), is analyzed separately. This paper proposes combining the histogram, GLRLM, and GLCM features to study their joint performance. A Support Vector Machine (SVM) classifies the extracted features as benign or malignant. The performance of the texture methods is evaluated using statistical parameters such as sensitivity, specificity, and accuracy. The comparative analysis was performed over 5500 digitized TRUS images of the prostate.

  3. A new method to extract stable feature points based on self-generated simulation images

    Science.gov (United States)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received much attention in photogrammetry, medical image processing, and related fields. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a common and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to identify stable extrema with a machine learning algorithm. First, we use the ASIFT approach, coupled with lighting changes and blur, to generate multi-view simulated images, which make up a set of simulated versions of the original image. Because of the way the simulated images are generated, the affine transformation of each is known exactly; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Second, we calculate a stability value for each feature point from the image set and its known affine transformations, and we collect feature properties of each point, such as DoG features, scale, and edge-point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. At run time, the feature properties of each point and the learned weight vector give a ranking score corresponding to stability, by which the feature points are sorted. We compared our algorithm against the original SIFT detectors; under different viewpoint changes, blurs, and illuminations, experimental results show that our algorithm is more efficient.

  4. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first define a three-dimensional grid in which each thread calculates a sub-block of data, which facilitates the spatial and spectral neighborhood searches in noise estimation, one of the most important steps of OMNF. We then optimize the processing flow by computing the noise covariance matrix before the image covariance matrix, reducing the amount of hyperspectral image data transmitted. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the Compute Unified Device Architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive parts such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two applications, spectral unmixing and classification. Considering the sensor scanning rate and data acquisition time, the proposed parallel implementation meets the requirements of on-board real-time feature extraction.

  5. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    Science.gov (United States)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like; moreover, they can extract more features than parametric methods. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameters, and its novelty is twofold. First, neighbor samples are specified using the Parzen window idea to determine the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in forming the between-class scatter matrix, and samples close to the class mean receive more weight in forming the within-class scatter matrix. Experimental results on three real hyperspectral data sets, Indian Pines, Salinas, and Pavia University, demonstrate that the proposed method performs better than several other nonparametric and parametric feature extraction methods.
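    The Parzen-window local mean idea mentioned above can be illustrated minimally (NumPy; the Gaussian window and its bandwidth are illustrative assumptions, not the paper's exact weighting functions):

    ```python
    import numpy as np

    def parzen_local_mean(X, x0, h=1.0):
        """Local mean around x0 with Gaussian Parzen-window weights,
        so that nearby samples dominate the estimate."""
        w = np.exp(-((X - x0) ** 2).sum(axis=1) / (2 * h ** 2))
        return (w[:, None] * X).sum(axis=0) / w.sum()

    # Two close samples and one far outlier: the outlier barely
    # contributes to the local mean at the first sample.
    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
    m = parzen_local_mean(X, X[0])
    print(m)  # close to [0.05, 0.0]
    ```

    Replacing global class means with such local means is what lets the scatter matrices adapt to non-normal class distributions.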

  6. A Novel Feature Extraction Scheme for Medical X-Ray Images

    Directory of Open Access Journals (Sweden)

    Prachi.G.Bhende

    2016-02-01

    Full Text Available X-ray images are gray-scale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database, features that can be used to perform reliable matching between different views of an object or scene. GLCM represents the distribution of intensities and information about the relative positions of neighboring pixels in an image. LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, carrying edge information over a grid of cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results on real problems of rotation invariance, including particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
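    The basic 8-neighbour LBP code underlying the features in this record can be sketched in NumPy (the rotation-invariant variant the record refers to is a refinement of this basic code):

    ```python
    import numpy as np

    def lbp_image(img):
        """Basic 8-neighbour local binary pattern: each interior pixel
        becomes an 8-bit code of threshold comparisons with its
        neighbours (neighbour >= centre sets the bit)."""
        c = img[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy:img.shape[0] - 1 + dy,
                     1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.uint8) << bit
        return code

    img = np.random.default_rng(5).integers(0, 256, (16, 16)).astype(np.uint8)
    codes = lbp_image(img)
    print(codes.shape)  # (14, 14)
    ```

    A histogram of these codes over the image (or over cells, as with HOG) is what actually enters the feature vector.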

  7. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  8. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high mortality relates mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, correct assessment of breast density is important for better screening of higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information across all densities, so a computational system for classifying breast density could be a useful tool for medical staff. Several machine-learning algorithms can already classify a small number of classes with good accuracy, but their main constraint is the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good feature set, selecting an initial set during classifier design is a complex task. We therefore propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a conventional machine-learning classifier. We used 307 mammographic images, downsampled to 260x200 pixels, to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. These features are then fed forward to a single-hidden-layer neural network, cross-validated with 10 folds, to classify among four breast density classes. The global accuracy of this method is 98.4%, with only 1.6% misclassification. However, the small sample set and memory constraints required reusing data in both the CNN and the MLP-NN, so overfitting might have influenced the results even though the network was cross-validated. Thus, although we present a promising method for extracting features and classifying breast density, a larger database is needed to confirm the results.
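    Extracting deep-layer activations as features, as this record describes, can be illustrated with a toy fully connected network (a NumPy stand-in for the trained CNN; the 8-neuron feature layer and four output classes follow the record, everything else is an illustrative assumption):

    ```python
    import numpy as np

    def forward_collect(x, weights):
        """Forward pass through a toy fully connected net, returning
        the activations of the last hidden layer as a feature vector
        alongside the output logits."""
        a = x
        for W, b in weights[:-1]:
            a = np.maximum(0, a @ W + b)    # ReLU hidden layers
        features = a                         # deep-layer activations
        W_out, b_out = weights[-1]
        logits = a @ W_out + b_out
        return features, logits

    rng = np.random.default_rng(6)
    layers = [(rng.normal(size=(20, 16)), np.zeros(16)),
              (rng.normal(size=(16, 8)), np.zeros(8)),   # 8-neuron deep layer
              (rng.normal(size=(8, 4)), np.zeros(4))]    # 4 density classes
    feats, logits = forward_collect(rng.normal(size=20), layers)
    print(feats.shape, logits.shape)  # (8,) (4,)
    ```

    In the record's pipeline, the `feats` vector (from the trained CNN, not random weights) is what gets passed to the separate MLP classifier.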

  9. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images is investigated. Mitosis detection is a very expensive and time-consuming process, and the development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images allows easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells, so it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is applied with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared across various sample sizes using Support Vector Machines with k-fold cross-validation. The results show that the separation accuracy between mitotic and non-mitotic cellular pixels improves with increasing spatial window size.

  10. Driver Fatigue Features Extraction

    Directory of Open Access Journals (Sweden)

    Gengtian Niu

    2014-01-01

    Full Text Available Driver fatigue is a main cause of traffic accidents, and extracting effective fatigue features is important for recognition accuracy and traffic safety. To address this problem, this paper proposes a new method of driver fatigue feature extraction based on facial image sequences. First, each facial image in the sequence is divided into non-overlapping blocks of the same size, and Gabor wavelets are employed to extract multiscale and multi-orientation features. The mean value and standard deviation of each block's features are then calculated. Since the facial expression of human fatigue is a dynamic process that develops over time, each block's features are analyzed across the sequence. Finally, the AdaBoost algorithm is applied to select the most discriminating fatigue features. The proposed method was tested on a self-built database covering a wide range of human subjects of different genders, poses, and illuminations in real-life fatigue conditions. Experimental results show the effectiveness of the proposed method.
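    The per-block Gabor mean and standard deviation features in this record can be sketched as follows (NumPy; a single scale, two orientations, and the block size are illustrative assumptions):

    ```python
    import numpy as np

    def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
        """Real part of a Gabor kernel at one scale and orientation."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
        return gauss * np.cos(2 * np.pi * xr / lam)

    def block_gabor_features(img, block=8, thetas=(0.0, np.pi / 2)):
        """Mean and std of Gabor responses per non-overlapping block."""
        feats = []
        for theta in thetas:
            k = gabor_kernel(theta=theta)
            patches = np.lib.stride_tricks.sliding_window_view(img, k.shape)
            resp = (patches * k).sum(axis=(-1, -2))   # filter responses
            for i in range(0, resp.shape[0] - block + 1, block):
                for j in range(0, resp.shape[1] - block + 1, block):
                    b = resp[i:i + block, j:j + block]
                    feats += [b.mean(), b.std()]
        return np.array(feats)

    img = np.random.default_rng(7).random((32, 32))
    f = block_gabor_features(img)
    print(f.shape)  # (36,)
    ```

    Computed per frame and tracked over the sequence, such block statistics form the dynamic features from which AdaBoost selects the most discriminating ones.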

  11. Texture based feature extraction methods for content based medical image retrieval systems.

    Science.gov (United States)

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving remains an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. This study examines the retrieval efficiency of spatial feature extraction methods for medical image retrieval systems. The investigated algorithms rely on the gray level co-occurrence matrix (GLCM), the gray level run-length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. In the experiments, a database of hundreds of medical images, including brain, lung, sinus, and bone, was built. The results show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet proved to be the most effective and accurate method.

  12. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    Science.gov (United States)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining systems adopt Artificial Neural Networks (ANNs) to solve problems, but training an ANN involves many issues: the number of labeled samples, training time and performance, the number of hidden layers, and the transfer function. If the results do not match expectations, it is unclear which dimension causes the deviation, mainly because an ANN fits the target outputs by modifying weights; it does not improve the original feature extraction algorithm for the image, but merely adjusts weights toward the correct value. Addressing these problems, this paper proposes a method to assist ANN-based image data analysis. Normally, a parameter is set as the value used to extract a feature vector when processing an image, and we treat this value as a weight. The experiment uses the values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; since SURF extracts different feature points depending on these values, we first perform semi-supervised clustering on them and use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one exhaustive comparison but compares only group centroids, mainly to save time and improve efficiency; the retrieved results are then observed and analyzed. In essence, the method clusters and classifies using the nature of the image feature points, assigns new values to groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network.

  13. Extraction of ABCD rule features from skin lesions images with smartphone.

    Science.gov (United States)

    Rosado, Luís; Castro, Rui; Ferreira, Liliana; Ferreira, Márcia

    2012-01-01

    One of the greatest challenges in dermatology today is the early detection of melanoma, since the success rate of curing this type of cancer is very high if it is detected in the early stages of development. The main objective of the work presented in this paper is to create a prototype of a patient-oriented system for skin lesion analysis using a smartphone. The work implements a self-monitoring system that collects, processes, and stores information about skin lesions through automatic extraction of specific visual features. The selection of features was based on the ABCD rule, which considers four visual criteria regarded as highly relevant for detecting malignant melanoma. The algorithms used to extract these features are briefly described, and the results achieved on images taken with the smartphone camera are discussed.
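    The "A" (asymmetry) criterion of the ABCD rule can be sketched from a binary lesion mask (an illustrative measure only, not the paper's exact algorithm; real systems also align the mask to its principal axes first):

    ```python
    import numpy as np

    def asymmetry_score(mask):
        """Fraction of the lesion mask that does not overlap its
        left-right mirror image: 0 for perfect symmetry."""
        flipped = mask[:, ::-1]
        non_overlap = np.logical_xor(mask, flipped).sum()
        return non_overlap / max(mask.sum(), 1)

    # A centred circle is perfectly symmetric.
    circle = np.zeros((21, 21), bool)
    y, x = np.ogrid[-10:11, -10:11]
    circle[x ** 2 + y ** 2 <= 64] = True
    print(asymmetry_score(circle))        # 0.0

    # Cutting one side off makes it asymmetric.
    blob = circle.copy()
    blob[:, :6] = False
    print(asymmetry_score(blob) > 0)      # True
    ```

    The B (border), C (colour), and D (diameter) criteria are computed from the same segmented mask and the underlying colour image.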

  14. Extraction of enclosure culture area from SPOT-5 image based on texture feature

    Science.gov (United States)

    Tang, Wei; Zhao, Shuhe; Ma, Ronghua; Wang, Chunhong; Zhang, Shouxuan; Li, Xinliang

    2007-06-01

    The east Taihu Lake region is characterized by high-density, large areas of enclosure culture, which tend to cause eutrophication of the lake and worsen its water quality. This paper takes a 380×380 area of the east Taihu Lake image as an example and discusses an extraction method that combines the texture features of high-resolution imagery with spectral information. First, we choose the best band combination of 1, 3, and 4 according to the principles of maximal entropy combination and the OIF index. After band arithmetic and principal component analysis (PCA), we achieve dimensionality reduction and data compression. Subsequently, textures of the first principal component image are analyzed using Gray Level Co-occurrence Matrices (GLCM), yielding the statistics contrast, entropy, and mean. The mean is fixed as the optimal index, and appropriate thresholds for extraction are determined. Finally, decision trees are established to extract the enclosure culture area. Combining spectral information with spatial texture features, we obtain a satisfactory extraction result and provide a technical reference for a wide-area survey of enclosure culture.

  15. Feature extraction from 3D lidar point clouds using image processing methods

    Science.gov (United States)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available to many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
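    The first two stages (point-to-raster interpolation, then conflation into a multi-channel image) can be sketched with plain NumPy binning. The synthetic point cloud, the 1 m cell size and the "building" block are illustrative assumptions, not the paper's data:

```python
import numpy as np

# synthetic point cloud: low ground returns plus an 8 m "building" where x > 50
rng = np.random.default_rng(1)
n = 20000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
z = rng.uniform(0, 2, n) + np.where(x > 50, 8.0, 0.0)
intensity = rng.uniform(0, 255, n)

# stage 1: bin 3D points onto a 1 m raster (per-cell max height, mean intensity)
nx = ny = 100
col = np.minimum(x.astype(int), nx - 1)
row = np.minimum(y.astype(int), ny - 1)
flat = row * nx + col

dsm = np.full(nx * ny, -np.inf)
np.maximum.at(dsm, flat, z)                       # highest return per cell (DSM-like)
counts = np.bincount(flat, minlength=nx * ny)
isum = np.bincount(flat, weights=intensity, minlength=nx * ny)
mean_int = np.where(counts > 0, isum / np.maximum(counts, 1), 0.0)

# stage 2: conflate the rasters into a multi-channel image for later classification
dsm_grid = np.where(np.isfinite(dsm), dsm, 0.0).reshape(ny, nx)
int_grid = mean_int.reshape(ny, nx)
multi = np.stack([dsm_grid, int_grid])
```

    Each channel of `multi` then behaves like a band of an ordinary image, so standard supervised classifiers can be applied per pixel.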

  16. IMAGING SPECTROSCOPY AND LIGHT DETECTION AND RANGING DATA FUSION FOR URBAN FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Mohammed Idrees

    2013-01-01

    Full Text Available This study presents our findings on the fusion of Imaging Spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image. The Minimum Noise Fraction (MNF) transform was used to order the hyperspectral bands according to their noise. Thereafter, we employed the Optimum Index Factor (OIF) to statistically select the most appropriate three-band combination from the MNF result. The composite image was classified using unsupervised classification (k-means algorithm) and the accuracy of the classification assessed. A Digital Surface Model (DSM) and LiDAR intensity were generated from the LiDAR point cloud. The LiDAR intensity was filtered to remove noise. A Hue Saturation Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy data with the DSM, as well as with the filtered intensity. The fusion of imaging spectroscopy and DSM was found to be quantitatively better than that of imaging spectroscopy and LiDAR intensity. The three datasets (imaging spectroscopy, DSM-fused and LiDAR-intensity-fused data) were classified into four classes: building, pavement, trees and grass, using unsupervised classification, and the accuracy of the classification assessed. The results of the study show that fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. Also, the classification accuracy improved from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data. Similarly, the Kappa coefficient increased from 0.71 to 0.82. On the other hand, classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly quantitatively, with an overall accuracy of 27.8% and a Kappa coefficient of 0.0988.
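    The OIF band selection used here (sum of band standard deviations divided by the sum of absolute inter-band correlations, maximised over all three-band combinations) is easy to sketch. The six random bands below, one made deliberately redundant, are assumptions for illustration only:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
bands = rng.normal(size=(6, 64, 64))
# make band 1 nearly a copy of band 0, so OIF should avoid pairing them
bands[1] = bands[0] * 0.95 + rng.normal(scale=0.05, size=(64, 64))

flat = bands.reshape(6, -1)
std = flat.std(axis=1)
corr = np.corrcoef(flat)

def oif(triple):
    # high stddev (information) rewarded, high correlation (redundancy) penalised
    i, j, k = triple
    return (std[i] + std[j] + std[k]) / (
        abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k]))

best = max(itertools.combinations(range(6), 3), key=oif)
```

    The winning triple excludes the correlated pair, which is exactly the behaviour the study exploits when picking a composite from the MNF bands.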

  17. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter-bank-based feature extraction and classifier design in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids reading many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc. that will...

  18. Optimal Feature Extraction Using Greedy Approach for Random Image Components and Subspace Approach in Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Mathu Soothana S.Kumar Retna Swami; Muneeswaran Karuppiah

    2013-01-01

    An innovative and uniform framework based on a combination of Gabor wavelets with principal component analysis (PCA) and multiple discriminant analysis (MDA) is presented in this paper. In this framework, features are extracted from the optimal random image components using a greedy approach. These feature vectors are then projected to subspaces for dimensionality reduction, which is used for solving linear problems. The design of Gabor filters, PCA and MDA are crucial processes used for facial feature extraction. The FERET, ORL and YALE face databases are used to generate the results. Experiments show that optimal random image component selection (ORICS) plus MDA outperforms ORICS and subspace projection approaches such as ORICS plus PCA. Our method achieves 96.25%, 99.44% and 100% recognition accuracy on the FERET, ORL and YALE databases for 30% training, respectively. This is a considerably improved performance compared with other standard methodologies described in the literature.
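    The subspace projection step (PCA for dimensionality reduction of flattened image features) can be sketched with a NumPy SVD. The toy "face" matrix generated from a few latent directions is an assumption standing in for real Gabor feature vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
# toy dataset: 40 flattened 64-dim feature vectors generated from 4 latent directions
latent = rng.normal(size=(40, 4))
basis = rng.normal(size=(4, 64))
X = latent @ basis + 0.01 * rng.normal(size=(40, 64))

# PCA: centre the data, take the SVD, project onto the k leading components
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 4
proj = (X - mu) @ Vt[:k].T            # reduced feature vectors, 64 -> 4 dims

explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
```

    In the paper's pipeline, vectors like `proj` (after PCA or MDA) are what the classifier actually compares.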

  19. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    Science.gov (United States)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update.
In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  20. LOW-LEVEL TIE FEATURE EXTRACTION OF MOBILE MAPPING DATA (MLS/IMAGES AND AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    P. Jende

    2016-03-01

    Full Text Available Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform’s position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform’s defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform’s three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed

  1. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to the high-level face recognition and expression analysis. This paper presents a novel method for the real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against the illumination changes, scale variation, head rotations, and hand interference.
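    The Procrustes shape distance adopted here removes translation, rotation and isotropic scale before comparing landmark sets, so similarity-transformed copies of the same shape score near zero. A minimal sketch with `scipy.spatial.procrustes`; the ten random landmarks and the transform parameters are assumptions:

```python
import numpy as np
from scipy.spatial import procrustes

# two landmark sets: the second is the first rotated, scaled and translated
rng = np.random.default_rng(4)
shape_a = rng.normal(size=(10, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape_b = 2.5 * shape_a @ R.T + np.array([3.0, -1.0])

# disparity is the residual sum of squares after the optimal similarity transform
_, _, disparity = procrustes(shape_a, shape_b)
```

    A near-zero disparity confirms the two configurations are the same shape in the Procrustes sense, which is exactly the invariance the paper's shape analysis relies on.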

  2. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    Science.gov (United States)

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.
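    The range filtering singled out above is a local max-minus-min operation: flat component maps score zero while spiky, artifact-like maps score high. A minimal sketch with `scipy.ndimage`; the two synthetic maps are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def range_filter(img, size=3):
    # local max minus local min: flat regions -> 0, spiky regions -> large
    return maximum_filter(img, size) - minimum_filter(img, size)

flat_map = np.ones((16, 16))                 # smooth, EEG-like component map
spiky = np.ones((16, 16))
spiky[::4, ::4] = 9.0                        # isolated peaks, artifact-like

r_flat = range_filter(flat_map)
r_spiky = range_filter(spiky)
```

    Statistics of the filtered map (mean, max, histogram) then form the feature vector fed to the LDA classifier.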

  3. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
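    The MST formulation of region agglomeration can be sketched with `scipy.sparse.csgraph`: vertices are polygonal partitions, edge weights are dissimilarities, and the MST gives the cheapest merge order. The 5-node adjacency matrix below is a hypothetical example, not data from the paper:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# dissimilarity-weighted adjacency of 5 polygon partitions (0 = not adjacent)
W = np.array([
    [0, 2, 0, 6, 0],
    [2, 0, 3, 8, 5],
    [0, 3, 0, 0, 7],
    [6, 8, 0, 0, 9],
    [0, 5, 7, 9, 0],
], dtype=float)

mst = minimum_spanning_tree(csr_matrix(W))
total = float(mst.toarray().sum())   # total dissimilarity of the merge tree
```

    Cutting the MST at edges whose weight exceeds a dissimilarity threshold yields the object-oriented segments; here the tree keeps edges 0-1, 1-2, 1-4 and 0-3.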

  4. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Topouzelis

    2008-10-01

    Full Text Available This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern, since they damage fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills on radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations on SAR images, the features which are extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  5. Detection of Brain Tumor and Extraction of Texture Features using Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Prof. Dilip Kumar Gandhi

    2012-10-01

    Full Text Available A brain cancer detection system is designed. The aim of this paper is to locate the tumor and determine the texture features from a brain-cancer-affected MRI. A computer-based diagnosis is performed in order to detect tumors from a given Magnetic Resonance Image. Basic image processing techniques are used to locate the tumor region; these consist of image enhancement, image binarization, and image morphological operations. Texture features are computed using the Gray Level Co-occurrence Matrix and consist of five distinct measures. Selected features, or combinations of them, will be used in the future to determine the class of a query image. For simplicity, only images affected by the astrocytoma type of brain cancer are used.
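    The binarization-plus-morphology pipeline for locating the tumor region can be sketched with `scipy.ndimage`: threshold, open to remove noise, then keep the largest connected component. The synthetic slice, threshold and structuring element are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

# synthetic MR slice: a bright circular "tumour" plus scattered salt noise
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
img[(yy - 40) ** 2 + (xx - 20) ** 2 < 60] = 1.0
rng = np.random.default_rng(5)
img[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = 1.0

binary = img > 0.5                                                    # binarization
cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))   # morphology
labels, n = ndimage.label(cleaned)                                    # connected components
sizes = ndimage.sum(cleaned, labels, range(1, n + 1))
tumour = labels == (np.argmax(sizes) + 1)                             # largest blob
cy, cx = ndimage.center_of_mass(tumour)
```

    The opening removes isolated noise pixels while the disc survives, so the largest remaining component localises the lesion; GLCM features would then be computed inside this mask.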

  6. A MapReduce scheme for image feature extraction and its application to man-made object detection

    Science.gov (United States)

    Cai, Fei; Chen, Honghui

    2013-07-01

    A fundamental challenge in image engineering is how to locate objects of interest in high-resolution images with efficient detection performance. Several man-made object detection approaches have been proposed, but the majority are not truly time-saving and suffer from low detection precision. To address this issue, we propose a novel approach for man-made object detection in aerial images involving a MapReduce scheme for large-scale image analysis, which can be widely applied to compute-intensive tasks in a highly parallel way to support image feature extraction, together with texture feature extraction and clustering. Comprehensive experiments show that the parallel framework saves voluminous time for feature extraction with satisfactory object detection performance.
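    The MapReduce idea, mapping per-tile feature computation over image tiles and reducing partial results into global statistics, can be sketched in a single process with `map` and `functools.reduce`. The tile size, the Gaussian test image and the chosen statistics are illustrative assumptions:

```python
import numpy as np
from functools import reduce

def map_tile(tile):
    # map phase: each worker emits partial sums for one image tile
    return {"n": tile.size, "sum": float(tile.sum()), "sumsq": float((tile ** 2).sum())}

def reduce_stats(a, b):
    # reduce phase: merge partial results from independent workers
    return {k: a[k] + b[k] for k in a}

rng = np.random.default_rng(6)
image = rng.normal(5.0, 2.0, size=(128, 128))
tiles = [image[r:r + 32, c:c + 32]
         for r in range(0, 128, 32) for c in range(0, 128, 32)]

agg = reduce(reduce_stats, map(map_tile, tiles))
mean = agg["sum"] / agg["n"]
var = agg["sumsq"] / agg["n"] - mean ** 2
```

    Because the tiles partition the image exactly, the reduced mean and variance match the global values; in a real deployment the `map` calls run on separate workers.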

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  9. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    OpenAIRE

    Lu Ping-ping; Du Kang-ning; Yu Wei-dong; Wang Yu; Deng Yun-kai

    2014-01-01

    Road network extraction in SAR images is one of the key tasks of military and civilian technologies. To solve the issues of road extraction of HJ-1-C SAR images, a road extraction algorithm is proposed based on the integration of ratio and directional information. Due to the characteristic narrow dynamic range and low signal to noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extracti...

  10. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    Science.gov (United States)

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have widely been used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer-assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without Dysplasia (OSFWD) and OSF with Dysplasia (OSFD), which would help the oral onco-pathologists to screen the subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in an M×N×3 matrix of integers. Depending on either normal or OSF condition, the image has various granular structures which are self-similar patterns at different scales, termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), Radial Basis Probabilistic Neural Network (RBPNN) to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8% respectively. Finally, we have proposed a novel integrated index called Oral Malignancy Index (OMI) using the HOS, LBP, LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can
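    The LBP features used in this record encode each pixel by comparing it with its eight neighbours. A minimal NumPy sketch of the basic (non-rotation-invariant) operator; the bit ordering and the constant test patch are assumptions:

```python
import numpy as np

def lbp(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
    return code

flat = np.full((8, 8), 5.0)       # a perfectly uniform patch
codes = lbp(flat)
hist = np.bincount(codes.ravel(), minlength=256)   # the LBP histogram feature
```

    On a uniform patch every neighbour ties with the centre, so all bits are set and every code is 255; on textured tissue the histogram spreads, which is what separates the OSF grades.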

  11. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images, and eventually to replace the manual digitization process by computer-assisted boundary detection and conversion to a vector layer in a GIS or a spatial database. In colour images, various features can be distinguished based on their colour. The features thus extracted as object borders can be stored as vector maps in a GIS or a spatial database after labelling and editing. Here, we present a complete methodology for the boundary extraction and skeletonization process from colour imagery, using a colour image segmentation algorithm, a crust extraction algorithm and a skeleton extraction algorithm. We present also a prototype application

  12. Interpretation of fingerprint image quality features extracted by self-organizing maps

    Science.gov (United States)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
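    The SOM in the two-tier approach maps block-wise ridge-pattern vectors onto a small grid of prototype nodes. A minimal 1-D SOM training loop in NumPy; the node count, learning-rate schedule and the two synthetic block clusters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
# block-wise "ridge pattern" vectors: two clusters of 9-dim blocks
data = np.vstack([rng.normal(0.0, 0.1, (50, 9)),
                  rng.normal(1.0, 0.1, (50, 9))])

nodes, steps = 4, 400
W = rng.uniform(0, 1, (nodes, 9))
for t in range(steps):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best matching unit
    lr = 0.5 * (1.0 - t / steps)                        # decaying learning rate
    sigma = 1.0 * (1.0 - t / steps) + 0.1               # shrinking neighbourhood
    for i in range(nodes):
        h = np.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        W[i] += lr * h * (x - W[i])                     # pull node toward sample

# quantization error: mean squared distance of each block to its nearest node
qe = float(np.mean([((W - x) ** 2).sum(axis=1).min() for x in data]))
```

    After training, each block's best-matching node index serves as its extracted feature, which the Random Forests stage then turns into a quality score.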

  13. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images. This process is an attempt to eventually replace manua...

  14. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction in SAR images is one of the key tasks for military and civilian technologies. To solve the issues of road extraction from HJ-1-C SAR images, a road extraction algorithm is proposed based on the integration of ratio and directional information. Due to the characteristically narrow dynamic range and low signal-to-noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. By applying the Radon transform, main road directions can be extracted. Cross interferences can be suppressed, and road continuity can then be improved by main-direction alignment and secondary road extraction. An HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance, with correctness (80.5%) and quality (70.1%) when applied to a SAR image with complex content.
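    The Radon-based direction estimation can be sketched as rotate-and-project: a projection taken along a road's direction collapses the road into a sharp peak, so the peakiest projection angle is the main road direction. The synthetic scene and the angle grid are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate

# synthetic SAR-like scene with one bright road running vertically
img = np.zeros((64, 64))
img[:, 30] = 1.0

def projection_peakiness(image, angle):
    # Radon-style projection: rotate, then sum along columns; a line aligned
    # with the projection direction produces a high-variance (peaky) profile
    return rotate(image, angle, reshape=False, order=1).sum(axis=0).var()

angles = range(-60, 61, 15)
main_dir = max(angles, key=lambda a: projection_peakiness(img, a))
```

    For the vertical road the variance is maximised at 0 degrees; on real imagery several peaks of the Radon transform would give the main and secondary road directions.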

  15. Extraction of Geometric Features of Wear Particles in Color Ferrograph Images Based on RGB Color Space

    Institute of Scientific and Technical Information of China (English)

    CHEN Gui-ming; WANG Han-gong; ZHANG Bao-jun; PAN Wei

    2003-01-01

    This paper analyzes the potential color formats of ferrograph images, and presents algorithms for converting those formats to the RGB (Red, Green, Blue) color space. Through statistical analysis of wear particles' geometric features in color ferrograph images in the RGB color space, we give the differences of ferrograph wear particles' geometric features between the RGB color space and gray-scale space, and calculate their respective distributions.

  16. Block truncation coding with color clumps:A novel feature extraction technique for content based image classification

    Indian Academy of Sciences (India)

    SUDEEP THEPADE; RIK DAS; SAURAV GHOSH

    2016-09-01

    The paper has explored the principle of block truncation coding (BTC) as a means to perform feature extraction for content based image classification. A variation of block truncation coding, named BTC with color clumps, has been implemented in this work to generate feature vectors. Classification performance with the proposed technique of feature extraction has been compared to existing techniques. Two widely used public datasets, named the Wang dataset and the Caltech dataset, have been used for analyses and comparisons of classification performances based on four different metrics. The study has established BTC with color clumps as an effective alternative for feature extraction compared to existing methods. The experiments were carried out in RGB color space. Two different categories of classifiers, viz. the K Nearest Neighbor (KNN) classifier and the RIDOR classifier, were used to measure the classification performances. A paired t test was conducted to establish the statistical significance of the findings. Evaluation of classifier algorithms was done in receiver operating characteristic (ROC) space.
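    Classic BTC, the principle this record builds on, represents each block by a bitmap plus two reconstruction levels chosen so that the block's mean and standard deviation are preserved exactly. A minimal single-block sketch (the color-clumps variation is not reproduced here; the 4×4 random block is an assumption):

```python
import numpy as np

def btc_block(block):
    """Classic BTC: bitmap plus two levels preserving block mean and std."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                         # flat block: a single level suffices
        return bitmap, m, m
    a = m - s * np.sqrt(q / (n - q))        # level assigned to the 0-bits
    b = m + s * np.sqrt((n - q) / q)        # level assigned to the 1-bits
    return bitmap, a, b

rng = np.random.default_rng(8)
block = rng.integers(0, 256, (4, 4)).astype(float)
bitmap, a, b = btc_block(block)
recon = np.where(bitmap, b, a)
```

    The pairs (a, b) gathered over all blocks (per color channel, in the paper's RGB setting) form the feature vector handed to the classifier; moment preservation is what makes them descriptive.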

  17. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    to predict by themselves the risk of rupture. Therefore, our hypothesis is that the risk of rupture lies in the combination of multiple factors. Together, these factors would play different roles, which could be: weakening of the artery wall; increasing biomechanical stresses on the wall induced by blood flow; in addition to personal sensitivity due to family history, personal history of comorbidity, or even seasonal variations that could gate different inflammation mechanisms. The main goal of this project is to identify relevant variables that may help in the process of predicting the risk of intracranial aneurysm rupture, using machine learning and image processing techniques based on structured and non-structured data from multiple sources. We believe that the identification and combined use of relevant variables extracted from clinical, demographical, environmental and medical imaging data sources will improve the estimation of the aneurysm rupture risk, with respect to the currently practiced method based essentially on aneurysm size. The methodology of this work consists of four phases: (1) data collection and storage, (2) feature extraction from multiple sources, in particular from angiographic images, (3) development of a model that could describe the risk of aneurysm rupture based on the fusion and combination of the features, and (4) identification of relevant variables related to the aneurysm rupture process. This study corresponds to an analytic cross-sectional study with prospective and retrospective characteristics. This work will be based on publicly available health statistics data and data on weather conditions, together with clinical and demographic data of patients diagnosed with intracranial aneurysm in the Hospital Carlos van Buren. As the main results of this project we expect to identify relevant variables extracted from images and other sources that could play a role in the risk of aneurysm rupture. The proposed model will be presented to the

  18. Classification Features of US Liver Images Extracted with Co-occurrence Matrix Using the Nearest Neighbor Algorithm

    Science.gov (United States)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita; Nicolae, Mariana Carmen

    2011-12-01

    The co-occurrence matrix has been applied successfully for echographic image characterization because it contains information about the spatial distribution of grey-scale levels in an image. The paper deals with the analysis of pixels in selected regions of interest of US images of the liver. The useful information obtained refers to texture features such as entropy, contrast, dissimilarity, and correlation, extracted with the co-occurrence matrix. The analyzed US images were grouped in two distinct sets: healthy liver and steatosis (fatty) liver. These two sets of echographic images of the liver build a database that includes only histologically confirmed cases: 10 images of healthy liver and 10 images of steatosis liver. The healthy subjects were used to compute the four textural indices and also served as the control dataset. We chose to study this disease because steatosis is the abnormal retention of lipids in cells. The texture features are statistical measures and can be used to characterize the irregularity of tissues. The goal is to extract the information using the Nearest Neighbor classification algorithm. The K-NN algorithm is a powerful tool for classifying texture features, grouping the texture features of healthy liver into a training set, on the one hand, and the texture features of steatosis liver into a holdout set, on the other hand. The results could be used to quantify the texture information and will allow a clear distinction between healthy and steatosis liver.
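
    The four descriptors named above can be computed directly from a normalized co-occurrence matrix. Below is a minimal numpy sketch (not the authors' code) of the descriptors and a 1-nearest-neighbor decision; the quantization depth `levels` and the single-pixel offset are assumptions:

    ```python
    import numpy as np

    def glcm(img, dx=1, dy=0, levels=8):
        """Gray-level co-occurrence matrix for one pixel offset, normalized."""
        m = np.zeros((levels, levels))
        h, w = img.shape
        for y in range(h - dy):
            for x in range(w - dx):
                m[img[y, x], img[y + dy, x + dx]] += 1
        return m / m.sum()

    def texture_features(p):
        """Entropy, contrast, dissimilarity and correlation from a normalized GLCM."""
        i, j = np.indices(p.shape)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        contrast = np.sum(p * (i - j) ** 2)
        dissimilarity = np.sum(p * np.abs(i - j))
        mu_i, mu_j = np.sum(i * p), np.sum(j * p)
        sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
        sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
        if sd_i > 0 and sd_j > 0:
            correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
        else:
            correlation = 1.0  # degenerate (constant) texture
        return np.array([entropy, contrast, dissimilarity, correlation])

    def nn_classify(feature, train_feats, train_labels):
        """1-nearest neighbor: return the label of the closest training vector."""
        d = np.linalg.norm(train_feats - feature, axis=1)
        return train_labels[int(np.argmin(d))]
    ```

    With more training samples per class, the same `nn_classify` call generalizes to the K-NN grouping described in the abstract.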

  19. Classification and Extraction of Urban Land-Use Information from High-Resolution Image Based on Object Multi-features

    Institute of Scientific and Technical Information of China (English)

    Kong Chunfang; Xu Kai; Wu Chonglong

    2006-01-01

    Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, the contradictions in land-use become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land-use. However, these data are only information sources and are a mixture of "information" and "noise"; processing, analysis, and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image by using the multi-feature information of the image objects, and adopts an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multi-features of the image objects, in order to provide information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently.

  20. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    Science.gov (United States)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers develop from adenomatous polyps, and adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor allows early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high-resolution, non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between en-face OCT images and endoscopic image patterns. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 um axial resolution. In the study, the en-face images were reconstructed by integrating the axial values in 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like patterns as the endoscopy images, and these patterns relate to the stages of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapidly reconstructed en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate a great potential for early detection of colorectal adenomas by using OCT imaging.

  1. The Study on Height Information Extraction of Cultural Features in Remote Sensing Images Based on Shadow Areas

    Science.gov (United States)

    Bao-Ming, Z.; Hai-Tao, G.; Jun, L.; Zhi-Qing, L.; Hong, H.

    2011-09-01

    Cultural features are important elements of a geospatial information library, and height is an important attribute of cultural features. The availability of height information and its precision have a direct influence on topographic maps, especially the quality of large-scale and medium-scale topographic maps, and on the level of surveying and mapping support. There are many methods for height information extraction, chiefly ground survey (direct field measurement), spatial sensors, and photogrammetry; however, automatic extraction remains very difficult. This paper emphasizes a segmentation algorithm for shadow areas under multiple constraints and realizes automatic extraction of height information by using shadows. A binarized image is obtained using a gray threshold estimated under the multiple constraints. Within the area of interest, spot elimination and region splitting are performed. After region labeling and elimination of non-shadowed regions, the shadow areas of cultural features can be found. The height of the cultural features can then be calculated from the shadow length, the sun altitude and azimuth angles, and the sensor altitude and azimuth angles. A great many experiments have shown that the mean square error of the extracted height information is close to 2 meters and the automatic extraction rate is close to 70%.
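
    The core geometric relation behind the height calculation is simple: on flat ground, an object of height h casts a shadow of length L = h / tan(alpha), where alpha is the sun altitude angle. A minimal sketch of the inverse computation, omitting the sensor altitude and azimuth corrections the paper also uses:

    ```python
    import math

    def height_from_shadow(shadow_len_m, sun_altitude_deg):
        """Object height from shadow length, assuming flat ground and a
        near-nadir sensor (the full model also corrects for the sensor's
        altitude and azimuth angles)."""
        return shadow_len_m * math.tan(math.radians(sun_altitude_deg))
    ```

    For example, a 10 m shadow under a 45-degree sun altitude implies an object roughly 10 m tall.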

  2. THE STUDY ON HEIGHT INFORMATION EXTRACTION OF CULTURAL FEATURES IN REMOTE SENSING IMAGES BASED ON SHADOW AREAS

    Directory of Open Access Journals (Sweden)

    Z. Bao-Ming

    2012-09-01

    Full Text Available Cultural features are important elements of a geospatial information library, and height is an important attribute of cultural features. The availability of height information and its precision have a direct influence on topographic maps, especially the quality of large-scale and medium-scale topographic maps, and on the level of surveying and mapping support. There are many methods for height information extraction, chiefly ground survey (direct field measurement), spatial sensors, and photogrammetry; however, automatic extraction remains very difficult. This paper emphasizes a segmentation algorithm for shadow areas under multiple constraints and realizes automatic extraction of height information by using shadows. A binarized image is obtained using a gray threshold estimated under the multiple constraints. Within the area of interest, spot elimination and region splitting are performed. After region labeling and elimination of non-shadowed regions, the shadow areas of cultural features can be found. The height of the cultural features can then be calculated from the shadow length, the sun altitude and azimuth angles, and the sensor altitude and azimuth angles. A great many experiments have shown that the mean square error of the extracted height information is close to 2 meters and the automatic extraction rate is close to 70%.

  3. Infiltrate Object Extraction in X-ray Image by Using Math-Morphology Method and Feature Region Analysis

    Directory of Open Access Journals (Sweden)

    Julius Santony

    2016-04-01

    Full Text Available Infiltrate is often called pulmonary vlek because of the white spots on the lung. The white spots may consist of fluid or consolidation, with the fluid arising from blood or suppuration. To detect the existence of infiltrate on the lung, a thorax X-ray examination can be performed. Infiltrates on a thorax X-ray image cannot be recognized by everyone; they are identified through careful examination by experts such as radiologists or pulmonologists. In this research, the infiltrate objects in thorax X-ray images of tuberculosis patients were extracted to clarify the objects. Processing of the thorax X-ray image begins with object detection through morphological segmentation, which consists of morphological dilation and erosion, with edge detection obtained from the difference between the dilation and erosion results. The next stage extracts the infiltrate objects using binarization and feature region analysis to omit the non-spotted parts and determine the infiltrate objects among the existing objects. The number and width of the extracted infiltrate objects on each side of the lung are then calculated using feature region analysis. The results indicate that infiltrate object extraction is able to produce an image in which the infiltrate objects are explicit. Trials on 40 thorax X-ray images of tuberculosis patients proved that, for well-extracted images, the position, total number, and width of infiltrates on the lung can be determined. Trials on 2 thorax X-ray images of healthy patients were also done as comparisons, and the results indicate that there are no infiltrate objects on either side of the lung.
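
    The dilation and erosion steps described above can be sketched in a few lines of numpy; here the object outline is taken as the morphological gradient (dilation minus erosion), a standard reading of combining the two operations for edge detection. The 3x3 square structuring element is an assumption:

    ```python
    import numpy as np

    def dilate(img, k=3):
        """Binary dilation with a k x k square structuring element."""
        h, w = img.shape
        pad = k // 2
        p = np.pad(img, pad)  # pad with 0 so the object can grow outward
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                out[y, x] = p[y:y + k, x:x + k].max()
        return out

    def erode(img, k=3):
        """Binary erosion with a k x k square structuring element."""
        h, w = img.shape
        pad = k // 2
        p = np.pad(img, pad, constant_values=1)  # pad with 1 so borders survive
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                out[y, x] = p[y:y + k, x:x + k].min()
        return out

    def morphological_edge(img, k=3):
        """Object outline: dilation minus erosion (morphological gradient)."""
        return dilate(img, k) - erode(img, k)
    ```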

  4. Feature Extraction of Gesture Recognition Based on Image Analysis for Different Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Rahul A. Dedakiya

    2015-05-01

    Full Text Available Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in our daily life: a communication system consisting of hand movements and facial expressions, communicating through actions and sight. This research mainly focuses on gesture extraction and finger segmentation in gesture recognition. In this paper, we have used image analysis technologies to create an application implemented in MATLAB. We use this application to segment and extract the fingers from one specific gesture. This paper aims to perform gesture recognition under different natural conditions, such as dark and glare conditions, different distances, and similar-object conditions, and then collects the results to calculate the successful extraction rate.

  5. Research on Image Feature Extraction Algorithm

    Institute of Scientific and Technical Information of China (English)

    李亚杰

    2016-01-01

    Image feature extraction is used in both computer vision and image processing; invariant features of an image can be extracted through computer analysis and processing, greatly improving the accuracy and speed of image processing. This paper mainly studies three classic image feature extraction algorithms, namely the SIFT, SURF, and ASIFT algorithms. Using the Matlab platform, the VC platform, and the OpenCV library functions, the paper simulates and debugs the above algorithms, compares the effects of the three feature extraction algorithms, and analyzes their performance as well as their advantages and disadvantages.
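
    The first stage of SIFT, detecting extrema in a difference-of-Gaussians (DoG) scale space, can be illustrated in plain numpy. This is a didactic reduction, not the full SIFT pipeline (no orientation assignment or descriptors), and the sigma ladder and threshold are assumed values:

    ```python
    import numpy as np

    def gaussian_blur(img, sigma):
        """Separable Gaussian blur implemented with 1-D convolutions."""
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        k /= k.sum()
        tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, tmp)

    def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.01):
        """Keypoints as local extrema of the difference-of-Gaussians stack,
        compared against their 26 scale-space neighbors (SIFT's first stage)."""
        stack = np.stack([gaussian_blur(img, s) for s in sigmas])
        dog = stack[1:] - stack[:-1]
        pts = []
        for s in range(1, dog.shape[0] - 1):
            for y in range(1, dog.shape[1] - 1):
                for x in range(1, dog.shape[2] - 1):
                    v = dog[s, y, x]
                    nbhd = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                    if abs(v) > thresh and (v == nbhd.max() or v == nbhd.min()):
                        pts.append((y, x))
        return pts
    ```

    A bright blob produces a DoG extremum at its center, which is exactly the behavior the detector exploits.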

  6. Feature Point Extraction from the Local Frequency Map of an Image

    Directory of Open Access Journals (Sweden)

    Jesmin Khan

    2012-01-01

    Full Text Available We propose a novel technique for detecting rotation- and scale-invariant interest points from the local frequency representation of an image. Local or instantaneous frequency is the spatial derivative of the local phase, where the local phase of any signal can be found from its Hilbert transform. Local frequency estimation can detect edge, ridge, corner, and texture information at the same time, and it shows high values at those dominant features of an image. For each pixel, we select an appropriate width of the window for computing the derivative of the phase. In order to select the width of the window for any given pixel, we use a measure of the extent to which the phases in the neighborhood of that pixel are in the same direction. The local frequency map thus obtained is then thresholded with a global thresholding approach to detect the interest or feature points. The repeatability rate, a performance evaluation criterion for interest point detectors, is used to check the geometric stability of the proposed method under different transformations. We present simulation results of the detection of feature points from images using the suggested technique and compare the proposed method with five existing approaches that yield good results. The results prove the efficacy of the proposed feature point detection algorithm. Moreover, in terms of repeatability rate, the results show that the performance of the proposed method under different transformations is comparable with that of the existing methods.
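
    The local frequency used here, the spatial derivative of the phase of the analytic signal, can be illustrated in one dimension with an FFT-based Hilbert transform. This fixed-window 1D sketch is only for illustration; the paper works in 2D with adaptively chosen window widths:

    ```python
    import numpy as np

    def analytic_signal(x):
        """1-D analytic signal via the FFT (x plus i times its Hilbert transform)."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1
        h[1:(n + 1) // 2] = 2      # double positive frequencies
        if n % 2 == 0:
            h[n // 2] = 1          # keep the Nyquist bin
        return np.fft.ifft(X * h)

    def local_frequency(x):
        """Instantaneous frequency: derivative of the unwrapped local phase."""
        phase = np.unwrap(np.angle(analytic_signal(x)))
        return np.gradient(phase) / (2 * np.pi)  # cycles per sample
    ```

    For a pure sinusoid the estimate is flat at the true frequency; near edges or ridges of a real signal it spikes, which is the property the detector thresholds.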

  7. A Combined Approach on RBC Image Segmentation through Shape Feature Extraction

    Directory of Open Access Journals (Sweden)

    Ruihu Wang

    2012-01-01

    Full Text Available The classification of erythrocytes plays an important role in clinical diagnosis. Given that the shape deformability of red blood cells makes automatic detection and recognition more difficult, we believe that recovered 3D surface shape features give more information than traditional 2D intensity image processing methods. This paper proposes a combined approach for complex surface segmentation of red blood cells based on the shape-from-shading technique and multiscale surface fitting. By means of the image irradiance equation under SEM imaging conditions, the 3D height field can be recovered from the varied shading. Afterwards, the depth map at each point on the surface is used to calculate the Gaussian curvature and mean curvature, which are used to produce a surface-type label image. Accordingly, the surface is segmented into different parts through multiscale bivariate polynomial function fitting. The experimental results showed that this approach is easily implemented and promising.

  8. Image mining for investigative pathology using optimized feature extraction and data fusion.

    Science.gov (United States)

    Chen, Wenjin; Meer, Peter; Georgescu, Bogdan; He, Wei; Goodell, Lauri A; Foran, David J

    2005-07-01

    In many subspecialties of pathology, the intrinsic complexity of rendering accurate diagnostic decisions is compounded by a lack of definitive criteria for detecting and characterizing diseases and their corresponding histological features. In some cases, there exists a striking disparity between the diagnoses rendered by recognized authorities and those provided by non-experts. We previously reported the development of an Image Guided Decision Support (IGDS) system, which was shown to reliably discriminate among malignant lymphomas and leukemia that are sometimes confused with one another during routine microscopic evaluation. As an extension of those efforts, we report here a web-based intelligent archiving subsystem that can automatically detect, image, and index new cells into distributed ground-truth databases. Systematic experiments showed that through the use of robust texture descriptors and density estimation based fusion the reliability and performance of the governing classifications of the system were improved significantly while simultaneously reducing the dimensionality of the feature space.

  9. Image Prediction Method with Nonlinear Control Lines Derived from Kriging Method with Extracted Feature Points Based on Morphing

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-01-01

    Full Text Available A method is proposed for image prediction with nonlinear control lines, derived from feature points extracted from previously acquired imagery, based on the Kriging method and a morphing method. Through comparisons between the proposed method and the conventional linear interpolation and widely used cubic spline interpolation methods, it is found that the proposed method is superior to the conventional methods in terms of prediction accuracy.

  10. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The proposed method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
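
    The two ICHE stages can be sketched as follows. This is a simplified illustration rather than the authors' implementation: it centers each image's mean intensity instead of the histogram centroid, and uses global histogram equalization in place of the modified CLAHE step:

    ```python
    import numpy as np

    def center_intensity(img, target_mean=128.0):
        """Shift the image so its mean intensity sits at a common point."""
        shifted = img.astype(float) + (target_mean - img.mean())
        return np.clip(shifted, 0, 255)

    def hist_equalize(img):
        """Global histogram equalization of an 8-bit image via its CDF."""
        img = np.asarray(img).astype(np.uint8)
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(float)
        span = cdf.max() - cdf.min()
        if span == 0:               # constant image: nothing to equalize
            return img
        cdf = (cdf - cdf.min()) / span
        return np.round(cdf[img] * 255).astype(np.uint8)

    def normalize_batch(images, target_mean=128.0):
        """ICHE-style pipeline: centering, then equalization, per image."""
        return [hist_equalize(center_intensity(im, target_mean)) for im in images]
    ```

    After centering, two slides scanned at different brightness levels share a common intensity anchor, so downstream features are compared on equal footing.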

  11. Analysis of Contourlet Texture Feature Extraction to Classify the Benign and Malignant Tumors from Breast Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Prabhakar Telagarapu

    2014-03-01

    Full Text Available The number of breast cancer cases has been increasing over the past three decades. Early detection of breast cancer is crucial for effective treatment. Mammography is used for early detection and screening; however, especially for young women, mammography procedures may not be very comfortable, and they involve ionizing radiation. Ultrasound is a broadly popular medical imaging modality because of its non-invasive, real-time, convenient and low-cost nature. However, the quality of ultrasound images is corrupted by speckle noise, whose presence severely degrades the signal-to-noise ratio (SNR) and contrast resolution of the image. Therefore, speckle noise needs to be reduced before extracting features. This research focuses on developing algorithms for speckle noise reduction, feature extraction, and classification of benign and malignant tumors; the results showed that SVM-polynomial classification produces a high classification rate (77%) for grey-level co-occurrence matrix (GLCM) based Contourlet features on wavelet soft-thresholding denoised breast ultrasound images.

  12. Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM

    Science.gov (United States)

    Ray, Arun Kumar; Thethi, Har Pal

    2017-01-01

    The segmentation, detection, and extraction of an infected tumor area from magnetic resonance (MR) images are a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and their accuracy depends only on their experience. The use of computer-aided technology therefore becomes very necessary to overcome these limitations. In this study, to improve the performance and reduce the complexity involved in the medical image segmentation process, we have investigated Berkeley wavelet transformation (BWT) based brain tumor segmentation. Furthermore, to improve the accuracy and quality rate of the support vector machine (SVM) based classifier, relevant features are extracted from each segmented tissue. The experimental results of the proposed technique have been evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The experimental results achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues in brain MR images. The experiments also obtained an average Dice similarity index coefficient of 0.82, which indicates good overlap between the automatically extracted tumor region and the tumor region extracted manually by radiologists. The simulation results prove the significance of the technique in terms of quality parameters and accuracy in comparison with state-of-the-art techniques.
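
    The Dice similarity index used for validation is straightforward to compute from two binary masks, as in this numpy sketch:

    ```python
    import numpy as np

    def dice_coefficient(a, b):
        """Dice similarity index between two binary masks:
        2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
        a = np.asarray(a).astype(bool)
        b = np.asarray(b).astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```

    A value of 0.82, as reported above, means the automated and manual tumor masks share most of their area but disagree along the boundary.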

  13. Unsupervised clustering analyses of features extraction for a caries computer-assisted diagnosis using dental fluorescence images

    Science.gov (United States)

    Bessani, Michel; da Costa, Mardoqueu M.; Lins, Emery C. C. C.; Maciel, Carlos D.

    2014-02-01

    Computer-assisted diagnoses (CAD) are performed by systems with embedded knowledge. These systems work as a second opinion to the physician and use patient data to infer diagnoses for health problems. Caries is the most common oral disease and directly affects both individuals and the society. Here we propose the use of dental fluorescence images as input of a caries computer-assisted diagnosis. We use texture descriptors together with statistical pattern recognition techniques to measure the descriptors performance for the caries classification task. The data set consists of 64 fluorescence images of in vitro healthy and carious teeth including different surfaces and lesions already diagnosed by an expert. The texture feature extraction was performed on fluorescence images using RGB and YCbCr color spaces, which generated 35 different descriptors for each sample. Principal components analysis was performed for the data interpretation and dimensionality reduction. Finally, unsupervised clustering was employed for the analysis of the relation between the output labeling and the diagnosis of the expert. The PCA result showed a high correlation between the extracted features; seven components were sufficient to represent 91.9% of the original feature vectors information. The unsupervised clustering output was compared with the expert classification resulting in an accuracy of 96.88%. The results show the high accuracy of the proposed approach in identifying carious and non-carious teeth. Therefore, the development of a CAD system for caries using such an approach appears to be promising.
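
    The PCA step, keeping the fewest components that explain a target fraction of the variance (seven components for 91.9% in the study), can be sketched with an SVD. This is a generic illustration, not the authors' code:

    ```python
    import numpy as np

    def pca_reduce(X, var_kept=0.9):
        """PCA via SVD: keep the fewest components whose cumulative explained
        variance reaches var_kept, and project the centered data onto them."""
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = S ** 2 / np.sum(S ** 2)          # variance ratio per component
        k = int(np.searchsorted(np.cumsum(explained), var_kept) + 1)
        return Xc @ Vt[:k].T, explained[:k]
    ```

    Highly correlated descriptors, as reported for the 35 texture features, collapse onto a handful of components exactly as in this reduction.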

  14. Unsupervised Multimodal Magnetic Resonance Images Segmentation and Multiple Sclerosis Lesions Extraction based on Edge and Texture Features

    Directory of Open Access Journals (Sweden)

    Tannaz AKBARPOUR

    2017-06-01

    Full Text Available Segmentation of Multiple Sclerosis (MS) lesions is a crucial part of MS diagnosis and therapy. Segmentation of lesions is usually performed manually, exposing the process to human error, so exploiting automatic and semi-automatic methods is of interest. In this paper, a new method is proposed to segment MS lesions from multichannel MRI data (T1-W and T2-W). For this purpose, statistical features of the spatial domain and wavelet coefficients of the frequency domain are extracted for each pixel of the skull-stripped images to form a feature vector. An unsupervised clustering algorithm is applied to group pixels and extract lesions. Experimental results demonstrate that the proposed method is better than other state-of-the-art and contemporary segmentation methods in terms of the Dice metric, specificity, false positive rate, and Jaccard metric.

  15. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    Science.gov (United States)

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Image feature extraction in encrypted domain with privacy-preserving SIFT.

    Science.gov (United States)

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2012-11-01

    Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant, and is capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through the security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext only attack and known plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
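
    The additively homomorphic property that such schemes rely on, the product of two ciphertexts decrypting to the sum of the plaintexts, can be demonstrated with a toy Paillier cryptosystem in pure Python. The tiny primes are for illustration only; the paper's actual construction and security parameters differ:

    ```python
    import random
    from math import gcd

    def paillier_keygen(p=1789, q=1861):
        """Toy Paillier key pair (insecurely small primes, illustration only)."""
        n = p * q
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
        g = n + 1
        # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) / n
        mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
        return (n, g), (lam, mu)

    def encrypt(pub, m):
        n, g = pub
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(pub, priv, c):
        n, _ = pub
        lam, mu = priv
        return ((pow(c, lam, n * n) - 1) // n) * mu % n
    ```

    Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is the kind of operation a server can perform without ever seeing the underlying feature values.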

  17. WAVELET BASED CONTENT BASED IMAGE RETRIEVAL USING COLOR AND TEXTURE FEATURE EXTRACTION BY GRAY LEVEL COOCURENCE MATRIX AND COLOR COOCURENCE MATRIX

    Directory of Open Access Journals (Sweden)

    Jeyanthi Prabhu

    2014-01-01

    Full Text Available In this study we propose an effective content-based image retrieval method using color and texture features based on wavelet coefficients to achieve good retrieval efficiency. Color feature extraction is done with a color histogram. Texture feature extraction is performed with the Gray Level Co-occurrence Matrix (GLCM) or the Color Co-occurrence Matrix (CCM). This study provides better results for image retrieval through integrated features. Feature extraction by color histogram, texture by GLCM, and texture by CCM are compared in terms of the precision performance measure.
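
    The color histogram feature mentioned above can be sketched as concatenated, normalized per-channel bin counts; the bin count of 8 and the L1 comparison are assumed parameters, not the paper's exact configuration:

    ```python
    import numpy as np

    def color_histogram(img, bins=8):
        """Concatenated per-channel histogram of an RGB image, normalized so
        the feature is independent of image size."""
        feats = []
        for c in range(img.shape[2]):
            h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
            feats.append(h / h.sum())
        return np.concatenate(feats)

    def hist_distance(f1, f2):
        """Simple L1 distance between two histogram feature vectors."""
        return np.abs(f1 - f2).sum()
    ```

    Retrieval then amounts to ranking database images by this distance to the query's feature vector.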

  18. Meta-optimization of the extended kalman filter's parameters for improved feature extraction on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2011-07-01

    Full Text Available ...modulated cosine function to improve land cover separation [3]. This paper proposes an extension to [3] in which each of the first two spectral bands is modelled separately as a triply modulated cosine function, expressed as y_{i,k,b} = ...

  19. Rapid Feature Extraction for Optical Character Recognition

    CERN Document Server

    Hossain, M Zahid; Yan, Hong

    2012-01-01

    Feature extraction is one of the fundamental problems of character recognition: the performance of a character recognition system depends on proper feature extraction and correct classifier selection. In this article, a rapid feature extraction method named Celled Projection (CP) is proposed, which computes the projections of each section formed by partitioning an image into cells. The recognition performance of the proposed method is compared with other widely used feature extraction methods that have been intensively studied for many different scripts in the literature. The experiments were conducted using Bangla handwritten numerals along with three different well-known classifiers, and demonstrate comparable results, including 94.12% recognition accuracy using celled projection.
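
    One plausible reading of the Celled Projection feature, partitioning the image into a grid of cells and concatenating each cell's row and column projections, is sketched below; the 3x3 grid is an assumption, not necessarily the configuration used in the article:

    ```python
    import numpy as np

    def celled_projection(img, cells=(3, 3)):
        """Celled Projection sketch: split the image into a grid of cells and
        concatenate the row and column projections (sums) of each cell."""
        h, w = img.shape
        cy, cx = cells
        feats = []
        for i in range(cy):
            for j in range(cx):
                cell = img[i * h // cy:(i + 1) * h // cy,
                           j * w // cx:(j + 1) * w // cx]
                feats.append(cell.sum(axis=1))  # horizontal projection
                feats.append(cell.sum(axis=0))  # vertical projection
        return np.concatenate(feats)
    ```

    The resulting fixed-length vector can be fed to any of the classifiers mentioned in the article.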

  20. A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images

    Directory of Open Access Journals (Sweden)

    Tung-Ying Wu

    2013-07-01

    Full Text Available Localization of the dominant points of the cervical spine in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically identify the dominant points of the cervical vertebrae in neck CT images with precision, we propose a method based on multi-scale contour analysis of the deformable shape of the spine. To extract the spine contour, we introduce a method that automatically generates the initial contour of the spine shape, from which the distance field for the level set active contour iterations can also be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at the zero-crossing points of the curvature, modeling the spine shape through curvature scale space analysis. Then, each segmented curve is analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are derived approximately by analysis at coarse scale, and then adjusted precisely at fine scale. The results of the experiment show that we achieve a success rate of 93.4% and an accuracy of 0.37 mm in comparison with the manual results.

  1. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    Science.gov (United States)

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, are recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results cover industrial application needs including oil and gas and any other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
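
The PCA feature-reduction step can be sketched generically (this is not the authors' pipeline; the random matrix stands in for the extracted textural/shape features): the feature vectors are centred and projected onto the top-k eigenvectors of their covariance matrix before being handed to a classifier.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples, n_features) onto the
    top-k principal components of their covariance matrix."""
    Xc = X - X.mean(axis=0)                    # centre the features
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # stand-in for extracted flow-image features
X[:, 3] = X[:, 0] * 2           # a redundant, correlated feature
Z = pca_reduce(X, k=3)          # reduced features fed to the SVM
```

The first reduced dimension carries the most variance, which is why discarding trailing components keeps classification accuracy while cutting computation.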

  2. A novel non-linear recursive filter design for extracting high rate pulse features in nuclear medicine imaging and spectroscopy.

    Science.gov (United States)

    Sajedi, Salar; Kamal Asl, Alireza; Ay, Mohammad R; Farahani, Mohammad H; Rahmim, Arman

    2013-06-01

    Applications in imaging and spectroscopy rely on pulse processing methods for appropriate data generation. Often, the particular method utilized does not strongly impact data quality, whereas in some scenarios, such as in the presence of high count rates or high frequency pulses, this issue merits extra consideration. In the present study, a new approach for pulse processing in nuclear medicine imaging and spectroscopy is introduced and evaluated. The new non-linear recursive filter (NLRF) performs nonlinear processing of the input signal and extracts the main pulse characteristics, with the notable ability to recover pulses that would ordinarily result in pulse pile-up. The filter design allows sampling frequencies lower than the Nyquist frequency. In the literature, for systems involving NaI(Tl) detectors and photomultiplier tubes (PMTs), with a signal bandwidth considered to be 15 MHz, the sampling frequency should be at least 30 MHz (the Nyquist rate), whereas in the present work a sampling rate of 3.3 MHz was shown to yield very promising results. This was obtained by exploiting the known pulse shape instead of utilizing a general sampling algorithm. The simulation and experimental results show that the proposed filter enhances count rates in spectroscopy. With this filter, the system behaves almost identically to a general pulse detection system, with the dead time considerably reduced to the new sampling time (300 ns). Furthermore, because of its unique ability to determine exact event times, the method could prove very useful in time-of-flight PET imaging.

  3. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    with sizes dependent on the content of the image, at the location of each triangle. In this paper, we will demonstrate that by rotation of the interest regions at the triangles it is possible in grey scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method...

  4. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction.

    Directory of Open Access Journals (Sweden)

    Fusong Yuan

    Full Text Available To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target carrying intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Differences between the measured coordinate values and the actual values of the 30 intersections on the detection target were analyzed using paired t-tests. The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate differences and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the differences in the coordinate values of the 30 cross points were not statistically significant (P > 0.05). Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is

  5. Texture feature extraction for the lung lesion density classification on computed tomography scan image

    Directory of Open Access Journals (Sweden)

    Hasnely

    2016-05-01

    Full Text Available Radiology examination by computed tomography (CT) scan enables early detection of lung cancer, which helps minimize the mortality rate. However, assessment and diagnosis by an expert are subjective, depending on the competence and experience of the radiologist. Hence, digital image processing of CT scans is needed as a tool to help diagnose lung cancer. This research proposes a morphological-characteristics method for detecting lung cancer lesion density using the histogram and the GLCM (Gray Level Co-occurrence Matrix). The most well-known artificial neural network (ANN) architecture, the multilayer perceptron (MLP), is used to classify lung cancer lesion density as heterogeneous or homogeneous. Fifty CT scan images of lungs obtained from the Department of Radiology of RSUP Dr. Sardjito Hospital, Yogyakarta, are used as the database. The results show that the proposed method achieved an accuracy of 98%, sensitivity of 96%, and specificity of 96%.
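
A GLCM and a few of the classic statistics derived from it can be sketched compactly (a generic sketch, not the paper's exact feature set; the toy 4-level image, the single horizontal offset, and the three chosen statistics are assumptions for the example):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray level co-occurrence matrix for offset (dx, dy).
    img must already be quantized to integer levels in [0, levels)."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Classic Haralick-style statistics from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
c, e, hmg = glcm_features(P)
```

Such statistics, computed per lesion region, would form the texture part of the feature vector fed to an MLP classifier.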

  6. Early detection and classification of powdery mildew-infected rose leaves using ANFIS based on extracted features of thermal images

    Science.gov (United States)

    Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah

    2016-05-01

    Spatial and temporal changes in surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature of up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other in a growth chamber (controlled ambient conditions). The effect of drought stress and darkness on the thermal images was also studied. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution. They have a skewness near zero, kurtosis under 3, standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) of more than 4. For each thermal histogram, central tendency, variability, and the parameters of the best-fitted Standard Normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the best extracted thermal features with the largest linguistic hedge values were chosen. Among the features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. Then, a neuro-fuzzy classifier was trained to distinguish the healthy leaves from the infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves: more healthy leaves under drought stress were classified as infected, causing PPV and Specificity index values to decrease accordingly. Image acquisition in the dark had no significant effect on classification performance.
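
Several of the histogram-shape features named above (MTD, SD, skewness, kurtosis) can be computed directly from a leaf's pixel temperatures; this minimal sketch uses the non-excess kurtosis convention (value 3 for a normal distribution, consistent with the "kurtosis under 3" statement), and the synthetic "healthy"/"infected" temperature samples are invented for illustration:

```python
import numpy as np

def thermal_features(temps):
    """Histogram-shape features of a set of pixel temperatures:
    MTD (max - min), standard deviation, skewness, kurtosis."""
    t = np.asarray(temps, dtype=float).ravel()
    mtd = t.max() - t.min()
    sd = t.std()
    z = (t - t.mean()) / sd
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean()   # equals 3 for a normal distribution
    return mtd, sd, skew, kurt

rng = np.random.default_rng(1)
healthy = rng.normal(loc=24.0, scale=0.3, size=10_000)   # narrow histogram
infected = rng.normal(loc=22.5, scale=0.9, size=10_000)  # cooler, wider spread
f_h = thermal_features(healthy)
f_i = thermal_features(infected)
```

Because MTD, SD, skewness and kurtosis depend only on the histogram's shape, they remain usable without absolute temperature calibration, which is why the paper singles them out.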

  7. Feature Extraction Using Mfcc

    Directory of Open Access Journals (Sweden)

    Shikha Gupta

    2013-08-01

    Full Text Available The Mel Frequency Cepstral Coefficient (MFCC) is a very common and efficient technique for signal processing. This paper presents a new purpose for MFCC by using it for hand gesture recognition. The objective of using MFCC for hand gesture recognition is to explore the utility of MFCC for image processing; until now it has been used in speech recognition and for speaker identification. The present system is based on converting the hand gesture into a one-dimensional (1-D) signal and then extracting the first 13 MFCCs from the converted 1-D signal. Classification is performed using a Support Vector Machine. Experimental results show that the proposed application of MFCC for gesture recognition has very good accuracy and hence can be used for recognition of sign language or for other household applications, in combination with other techniques such as Gabor filters or the DWT to increase the accuracy rate and make the system more efficient.

  8. Digital Image Assisted Point Cloud Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    周春艳; 罗敏; 邹峥嵘

    2011-01-01

    A novel method for feature extraction from laser point clouds is proposed. Each point in the point cloud is mapped to a pixel in a two-dimensional digital image; features of interest are then extracted from the digital image using existing image processing algorithms. Through the correspondence between the digital image and the point cloud, the features found in the image are mapped back to their corresponding points in the point cloud, and these points form the candidate feature point sets. The point sets are then fitted to remove noise points and obtain more accurate feature point sets. Experiments show that this method can extract point cloud features quickly and relatively precisely.

  9. Feature extraction algorithm for CT images based on NSCT-GLCM

    Institute of Scientific and Technical Information of China (English)

    张人上

    2014-01-01

    Feature extraction is a key problem in the segmentation of large volumes of CT images. A novel feature extraction algorithm for CT images based on the Non-Subsampled Contourlet Transform (NSCT) and the Gray Level Co-occurrence Matrix (GLCM) is proposed in this paper. Firstly, the CT image is decomposed at multiple scales and in multiple directions by the NSCT, and the co-occurrence features of the sub-band images are extracted by the GLCM. Principal component analysis then eliminates redundant features, and the remaining features are composed into multi-feature vectors. Finally, the CT image is segmented by a support vector machine operating on the multi-feature vector space. The experimental results show that the proposed algorithm can effectively extract CT image features, improves CT image segmentation accuracy, and can provide supporting information for physician diagnosis.

  10. Feature extraction using fractal codes

    NARCIS (Netherlands)

    Schouten, Ben; Zeeuw, Paul M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  11. Feature Extraction Using Fractal Codes

    NARCIS (Netherlands)

    Schouten, B.A.M.; Zeeuw, P.M. de

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  12. SU-D-BRA-07: A Phantom Study to Assess the Variability in Radiomics Features Extracted From Cone-Beam CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Fave, X; Fried, D [UT MD Anderson Cancer Center, Houston, TX (United States); UT Health Science Center Graduate School of Biomedical Sciences, Houston, TX (United States); Zhang, L; Yang, J; Balter, P; Followill, D; Gomez, D; Jones, A; Stingo, F; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Several studies have demonstrated the prognostic potential for texture features extracted from CT images of non-small cell lung cancer (NSCLC) patients. The purpose of this study was to determine if these features could be extracted with high reproducibility from cone-beam CT (CBCT) images in order for features to be easily tracked throughout a patient’s treatment. Methods: Two materials in a radiomics phantom, designed to approximate NSCLC tumor texture, were used to assess the reproducibility of 26 features. This phantom was imaged on 9 CBCT scanners, including Elekta and Varian machines. Thoracic and head imaging protocols were acquired on each machine. CBCT images from 27 NSCLC patients imaged using the thoracic protocol on Varian machines were obtained for comparison. The variance for each texture measured from these patients was compared to the variance in phantom values for different manufacturer/protocol subsets. Levene’s test was used to identify features which had a significantly smaller variance in the phantom scans versus the patient data. Results: Approximately half of the features (13/26 for material1 and 15/26 for material2) had a significantly smaller variance (p<0.05) between Varian thoracic scans of the phantom compared to patient scans. Many of these same features remained significant for the head scans on Varian (12/26 and 8/26). However, when thoracic scans from Elekta and Varian were combined, only a few features were still significant (4/26 and 5/26). Three features (skewness, coarsely filtered mean and standard deviation) were significant in almost all manufacturer/protocol subsets. Conclusion: Texture features extracted from CBCT images of a radiomics phantom are reproducible and show significantly less variation than the same features measured from patient images when images from the same manufacturer or with similar parameters are used. Reproducibility between CBCT scanners may be high enough to allow the extraction of

  13. Identification of error making patterns in lesion detection on digital breast tomosynthesis using computer-extracted image features

    Science.gov (United States)

    Wang, Mengyu; Zhang, Jing; Grimm, Lars J.; Ghate, Sujata V.; Walsh, Ruth; Johnson, Karen S.; Lo, Joseph Y.; Mazurowski, Maciej A.

    2016-03-01

    Digital breast tomosynthesis (DBT) can improve lesion visibility by eliminating the issue of overlapping breast tissue present in mammography. However, this new modality likely requires new approaches to training, and the issue of training in DBT is not well explored. We propose a computer-aided educational approach for DBT training. Our hypothesis is that trainees' educational outcomes will improve if they are presented with cases individually selected to address their weaknesses. In this study, we focus on the question of how to select such cases. Specifically, we propose an algorithm that, based on previously acquired reading data, predicts which lesions will be missed by the trainee in future cases (i.e., we focus on false negative errors). A logistic regression classifier was used to predict the likelihood of trainee error, with computer-extracted features as the predictors. Reader data from 3 expert breast imagers were used to establish the ground truth, and reader data from 5 radiology trainees were used to evaluate the algorithm performance with repeated holdout cross validation. Receiver operating characteristic (ROC) analysis was applied to measure the performance of the proposed individual trainee models. The preliminary experimental results for 5 trainees showed that the individual trainee models were able to distinguish the lesions that would be detected from those that would be missed, with an average area under the ROC curve of 0.639 (95% CI, 0.580-0.698). The proposed algorithm can be used to identify difficult cases for individual trainees.

  14. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2014-01-01

    Full Text Available To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray level co-occurrence matrix, and the shape characteristics based on geometric theory are extracted from the flotation froth images as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.
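
The HSI color features can be illustrated with a minimal sketch (the standard RGB-to-HSI conversion plus per-channel mean/std statistics; the abstract does not specify which HSI statistics are used, so the mean/std choice and the toy "froth" patch are assumptions):

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (H, W, 3, floats in [0,1]) to H, S, I channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-10)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-10
    theta = np.arccos(np.clip(num / den, -1, 1))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

def color_features(img):
    """Mean and standard deviation of each HSI channel (6 numbers)."""
    return np.array([f(ch) for ch in rgb_to_hsi(img) for f in (np.mean, np.std)])

froth = np.zeros((4, 4, 3))
froth[..., 0] = 1.0          # pure red patch: S = 1, I = 1/3
feats = color_features(froth)
```

These six numbers would be concatenated with the GLCM texture and geometric shape features before the isometric-mapping reduction step.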

  15. Features extraction of flotation froth images and BP neural network soft-sensor model of concentrate grade optimized by shuffled cuckoo searching algorithm.

    Science.gov (United States)

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray level co-occurrence matrix, and the shape characteristics based on geometric theory are extracted from the flotation froth images as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.

  16. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video teleconference applications is presented. Semantic-based coding built on human image features is realized, with the human features adopted as parameters. Model-based coding and the concept of vector coding are combined with image feature extraction to obtain the result.

  17. Infrared image mosaic using point feature operators

    Science.gov (United States)

    Huang, Zhen; Sun, Shaoyuan; Shen, Zhenyi; Hou, Junjie; Zhao, Haitao

    2016-10-01

    In this paper, we study infrared image mosaicking around a single point of rotation, aiming to expand the narrow field of view of infrared images. We propose an infrared image mosaic method using point feature operators that comprises image registration and image synthesis. Traditional mosaic algorithms usually use global registration methods that extract feature points over the whole image, which costs too much time and introduces considerable matching errors. To address this issue, we first roughly calculate the image shift using phase correlation and determine the overlap region between images, and then extract image features only in the overlap region, which shortens the registration time and improves the quality of the feature points. We further improve the traditional algorithm by adding point-matching constraints based on prior knowledge of the image shift, and the weighted blending map is computed using a fade-in/fade-out method. The experimental results verify that the proposed method has better real-time performance and robustness.

  18. Multispectral Image Feature Points

    Directory of Open Access Journals (Sweden)

    Cristhian Aguilera

    2012-09-01

    Full Text Available This paper presents a novel feature point descriptor for the multispectral image case: far-infrared and visible spectrum images. It allows matching interest points on images of the same scene acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from the multispectral images are matched by finding nearest couples using the information from the descriptor. The experimental results and comparisons with similar methods show both the validity of the proposed approach and the improvements it offers with respect to the current state of the art.
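
The "nearest couples" matching stage can be sketched generically (this is not the paper's exact matcher: the added ratio test is an assumption borrowed from common SIFT practice, and the random descriptors stand in for EOH vectors):

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Match each descriptor in d1 to its nearest neighbour in d2,
    keeping only matches that pass a nearest/second-nearest ratio test."""
    matches = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(2)
descA = rng.normal(size=(5, 16))                                  # "visible" descriptors
descB = descA[[3, 1, 4, 0, 2]] + rng.normal(scale=0.01, size=(5, 16))  # permuted + noise
m = match_descriptors(descA, descB)
```

With nearly identical descriptors in a different order, every point recovers its counterpart, and the ratio test rejects ambiguous matches where the two nearest candidates are similar.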

  19. PROJECTION BASED STATISTICAL FEATURE EXTRACTION WITH MULTISPECTRAL IMAGES AND ITS APPLICATIONS ON THE YELLOW RIVER MAINSTREAM LINE DETECTION

    Institute of Scientific and Technical Information of China (English)

    Zhang Yanning; Zhang Haichao; Duan Feng; Liu Xuegong; Han Lin

    2009-01-01

    The mainstream line is significant for Yellow River situation forecasting and flood control. An effective statistical feature extraction method is proposed in this paper. In this method, a between-class scattering matrix based projection algorithm is performed to maximize between-class differences, obtaining an effective component for classification; then high-order statistics are utilized as the features to describe the mainstream line in the obtained principal component. Experiments are performed to verify the applicability of the algorithm. The results on both synthesized and real scenes indicate that this approach can extract the mainstream line of the Yellow River automatically, with high precision in mainstream line detection.

  20. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low level features, such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining the information of the image features color, shape and texture. The color feature is extracted using color histograms over image blocks, the Canny edge detection algorithm is used for the shape feature, and HSB extraction in blocks is used for texture feature extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that the fusion of multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to a CBIR system in which the image features color, shape and texture are used.
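
The per-block histogram feature and a simple comparison measure can be sketched as follows (a minimal gray-level stand-in for the paper's per-block color histograms; the block count, bin count, histogram-intersection similarity, and toy images are all assumptions):

```python
import numpy as np

def block_histograms(img, blocks=2, bins=8):
    """Split the image into blocks x blocks regions and concatenate
    their normalized gray-level histograms into one feature vector."""
    h, w = img.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = img[i * h // blocks:(i + 1) * h // blocks,
                      j * w // blocks:(j + 1) * w // blocks]
            hist, _ = np.histogram(blk, bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

def similarity(f1, f2):
    """Histogram intersection: 1.0 for identical feature vectors."""
    return np.minimum(f1, f2).sum() / f1.sum()

query = np.full((8, 8), 40)    # uniform dark image
same = np.full((8, 8), 45)     # falls in the same histogram bin
other = np.full((8, 8), 200)   # falls in a different bin
s1 = similarity(block_histograms(query), block_histograms(same))
s2 = similarity(block_histograms(query), block_histograms(other))
```

Blocking preserves coarse spatial layout that a single global histogram discards, which is the point of extracting features per block.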

  1. COLOR FEATURE EXTRACTION FOR CBIR

    Directory of Open Access Journals (Sweden)

    Dr. H.B.KEKRE

    2011-12-01

    Full Text Available Content Based Image Retrieval is the application of computer vision techniques to the image retrieval problem of searching for digital images in large databases. The method of CBIR discussed in this paper can filter images based on their content, providing better indexing and more accurate results. In this paper we discuss: feature vector generation using a color averaging technique, similarity measures, and performance evaluation using 5 randomly selected query images per class, of which the result of one class is discussed. A Precision-Recall crossover plot is used as the performance evaluation measure to check the algorithm. As the system developed is generic, the database consists of images from different classes. The effect of the size of the database and the number of different classes on the relevancy of the retrievals is examined.
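
The precision and recall values underlying such a crossover plot are simple to compute per query; this sketch uses invented image identifiers and counts purely for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query: `retrieved` is the ranked
    result list, `relevant` the set of truly relevant database images."""
    hits = sum(1 for r in retrieved if r in relevant)
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

# hypothetical query: 10 results returned, 8 of them relevant,
# out of 20 relevant images in the database class
retrieved = [f"img{i}" for i in range(10)]
relevant = {f"img{i}" for i in range(8)} | {f"db{i}" for i in range(12)}
p, r = precision_recall(retrieved, relevant)
```

Sweeping the number of retrieved results trades precision against recall; the crossover point of the two curves is the single summary number the paper reports.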

  2. Study of the LDA algorithm in image feature extraction

    Institute of Scientific and Technical Information of China (English)

    钟彩

    2014-01-01

    Human cells are directly related to the body's various performance indicators. In image feature research, microscopic cell image data are very complex, and these data greatly affect subsequent study. To improve recognition performance, this paper takes the analysis of red blood cell images in urine as an example and applies the LDA algorithm to transform the main features of the cell images and extract the principal feature data, thereby achieving image feature extraction.
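
The core of LDA can be illustrated with a two-class Fisher discriminant sketch (a generic sketch, not the paper's implementation; the synthetic "cell feature" clusters are invented for the example): the projection direction maximizes between-class separation relative to within-class scatter.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant direction: w = Sw^-1 (m1 - m0),
    normalized to unit length."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
cells_other = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))  # one class of cells
cells_rbc = rng.normal(loc=[2, 2], scale=0.5, size=(200, 2))    # red blood cells
w = fisher_lda(cells_other, cells_rbc)
p0 = cells_other @ w   # projections of class 0
p1 = cells_rbc @ w     # projections of class 1, well separated from p0
```

Projecting high-dimensional cell-image features onto a handful of such directions is what reduces the data to its principal discriminative features.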

  3. Automated feature extraction by combining polarimetric SAR and object-based image analysis for monitoring of natural resource exploitation

    OpenAIRE

    Plank, Simon; Mager, Alexander; Schöpfer, Elisabeth

    2015-01-01

    An automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar data (PolSAR) and an object-based post-classification is presented. High resolution SpotLight dual-polarimetric (HH/VV) TerraSAR-X imagery acquired over the Doba basin, Chad, is used for method development and validation. In an iterative training procedure the best suited polarimetric speckle filter, processing parameters for the following en...

  4. Feature extraction of seal images based on GLCM in the NSCT domain

    Institute of Scientific and Technical Information of China (English)

    余彪; 万水龙; 刘进; 赵爽

    2014-01-01

    In order to effectively extract seal image characteristics, a feature extraction method for seal images based on the GLCM in the NSCT domain is proposed in this paper. Firstly, the seal image is decomposed by the NSCT to obtain multiple sub-band images, and the characteristic information is extracted according to the GLCM, with characteristic parameters used to describe the matrix; the feature vector composed of these characteristic parameters represents the feature information of the image. Finally, different types of seal images are distinguished by Euclidean distance in the feature space. Extensive experiments on seal images show that, compared with the GLCM method and the LBP-GLCM fusion method, the proposed method achieves a higher recognition rate.

  5. Extraction of essential features by quantum density

    Science.gov (United States)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction as an essential and important search over a dataset. This problem describes the real ownership of the signals and images. The searched features are often difficult to identify because of data complexity and redundancy. A method of finding essential feature groups, according to the defined issues, is shown. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th feature from the original data, which indicates the important set of attributes. Finally, small sets of attributes have been generated for subsets with different properties of features. They can be used for the construction of a small set of essential features. All figures were made in Matlab6.

  6. Feature extraction for speaker diarization

    OpenAIRE

    Negre Rabassa, Enric

    2016-01-01

    Different low-level and high-level features for automatic speaker diarization will be explored and compared, performing feature extraction for speaker diarization using different databases.

  7. SIFT Feature Extraction and Matching of Lunar Surface Images

    Institute of Scientific and Technical Information of China (English)

    陈坤; 王璐; 储珺

    2011-01-01

    On the basis of analyzing the characteristics of lunar surface images at different scales and in different spectral bands, this paper extracts feature points and performs matching on multispectral lunar surface image data and multi-scale image pairs, and verifies the effectiveness of the proposed algorithm, using Visual C++ and OpenCV as the development platform. Experimental results show that the improved SIFT feature is invariant to rotation, translation, scale, and brightness changes, remains stable under viewpoint variations, and accomplishes feature extraction and matching of various lunar surface images well.

  8. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern (LBP) feature extraction method for textures is introduced. Filtering is also used during feature extraction to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both training and test images before extraction. Wiener and median filters are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier, and conduct a comparative analysis on a benchmark dataset with different filters and sizes. Our experiments demonstrate that combining the feature extraction process with filtering gives promising results on noisy images.
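
The basic LBP step can be sketched as follows (a minimal 8-neighbour LBP with one fixed offset ordering, not the paper's full filter-plus-classifier pipeline): each pixel is encoded by which of its eight neighbours are at least as bright, and the image's texture feature is the histogram of these codes.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern code for each interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour plane shifted by (dy, dx), aligned with the centers
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture feature vector."""
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, 256))
    return h / h.sum()

flat = np.full((10, 10), 7)    # perfectly flat patch
tex = lbp_histogram(flat)      # all mass lands on code 255
```

On a flat patch every neighbour ties with the center, so every pixel gets code 255; textured patches spread mass across many codes, which is what the classifier exploits.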

  9. Real-time Feature Extraction from Infrared Gray-scale Aircraft Images

    Institute of Scientific and Technical Information of China (English)

    马惠敏; 郑链; 王克勇

    2001-01-01

    This paper presents a top-down adaptive threshold selection method for gray-image segmentation based on target characteristics. The infrared aircraft image is quantized into gray levels using these thresholds, and edge features and invariant angle features are extracted from the resulting gray image and sent to the pattern classifier as inputs.

  10. Online Feature Extraction Algorithms for Data Streams

    Science.gov (United States)

    Ozawa, Seiichi

    Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (text, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification; those face images form a stream of data. Therefore, to identify a person more accurately under realistic conditions, a high-performance feature extraction method for streaming data, one that can adapt autonomously to changes in the data distribution, is needed. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently; due to space limitations, we focus here on incremental principal component analysis.
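
    Since the review singles out incremental principal component analysis, here is a minimal sketch using scikit-learn's IncrementalPCA, which updates its eigenspace chunk by chunk via partial_fit without holding the whole stream in memory. The random chunks stand in for streaming feature vectors such as face images.

```python
# Incremental PCA over a simulated data stream.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=5)

# Simulate a stream of 20 chunks of 100 sixty-four-dimensional vectors.
for _ in range(20):
    chunk = rng.normal(size=(100, 64))
    ipca.partial_fit(chunk)          # update the eigenspace with the new chunk

codes = ipca.transform(rng.normal(size=(3, 64)))   # project new samples
print(codes.shape)
```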

  11. Feature Extraction and Minutia Recording for Fingerprint Images

    Institute of Scientific and Technical Information of China (English)

    杨娱

    2009-01-01

    This paper studies a minutia-based approach to fingerprint feature extraction. For post-processing, a new approach to eliminating false features is presented; the elimination algorithm is based on the distribution patterns of fingerprint features. For recording feature information, we propose a new method that does not depend on locating the core point of the fingerprint and that expresses the distance between two features as a ridge count, which increases the robustness of the algorithm.

  12. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. A Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording the false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from experiments based on different sets of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when the number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves the EER of the proposed iris recognition system.
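
    A hedged sketch of the general idea, not the paper's exact algorithm: compute correlations between adjacent pixels along the radial (row) and angular (column) directions of a normalized iris strip, binarize the statistics into a code, and compare codes with a Hamming distance. The random strips stand in for normalized iris images.

```python
# Adjacent-pixel correlation features + Hamming-distance matching (illustrative).
import numpy as np

def iris_code(strip):
    # strip: normalized iris image; rows = radial direction, cols = angular.
    radial  = [np.corrcoef(strip[i], strip[i + 1])[0, 1]
               for i in range(strip.shape[0] - 1)]
    angular = [np.corrcoef(strip[:, j], strip[:, j + 1])[0, 1]
               for j in range(strip.shape[1] - 1)]
    feats = np.array(radial + angular)
    return (feats > np.median(feats)).astype(np.uint8)   # binarize the statistics

def hamming(a, b):
    return np.mean(a != b)            # fraction of disagreeing code bits

rng = np.random.default_rng(2)
strip = rng.random((16, 64))
same = hamming(iris_code(strip), iris_code(strip))             # identical irises
diff = hamming(iris_code(strip), iris_code(rng.random((16, 64))))
print(same, diff)
```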

  13. Hepatic CT image query using Gabor features

    Institute of Scientific and Technical Information of China (English)

    Chenguang Zhao(赵晨光); Hongyan Cheng(程红岩); Tiange Zhuang(庄天戈)

    2004-01-01

    A retrieval scheme for liver computed tomography (CT) images based on Gabor texture is presented. For each hepatic CT image, we manually delineate abnormal regions within the liver area. A continuous Gabor transform is then used to analyze the texture of the pathology-bearing region and extract the corresponding feature vectors. For a given sample image, we compare its feature vector with those of the other images, and the most similar images with the highest rank are retrieved. In experiments, 45 liver CT images were collected, and the effectiveness of Gabor texture for content-based retrieval was verified.
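
    The Gabor-texture retrieval loop can be sketched as below with scikit-image: filter each region with a small Gabor bank, use the mean and variance of each response magnitude as the feature vector, and rank database images by Euclidean distance to the query. The random patches stand in for delineated hepatic regions, and the small filter bank is an assumption, not the paper's configuration.

```python
# Gabor-texture features + Euclidean ranking (illustrative filter bank).
import numpy as np
from skimage.filters import gabor

def gabor_features(region, freqs=(0.1, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(region, frequency=f, theta=t)
            mag = np.hypot(real, imag)           # response magnitude
            feats += [mag.mean(), mag.var()]
    return np.array(feats)

rng = np.random.default_rng(3)
database = [rng.random((32, 32)) for _ in range(5)]
query = database[2]

dists = [np.linalg.norm(gabor_features(query) - gabor_features(img))
         for img in database]
best = int(np.argmin(dists))
print(best)   # the query's own entry ranks first
```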

  14. Abdominal tuberculosis: Imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Jose M. [Department of Radiology, Hospital de S. Joao, Porto (Portugal)]. E-mail: jmpjesus@yahoo.com; Madureira, Antonio J. [Department of Radiology, Hospital de S. Joao, Porto (Portugal); Vieira, Alberto [Department of Radiology, Hospital de S. Joao, Porto (Portugal); Ramos, Isabel [Department of Radiology, Hospital de S. Joao, Porto (Portugal)

    2005-08-01

    Radiological findings of abdominal tuberculosis can mimic those of many different diseases. A high level of suspicion is required, especially in high-risk populations. In this article, we describe barium study, ultrasound (US), and computed tomography (CT) findings of abdominal tuberculosis (TB), with emphasis on the latter. We illustrate CT findings that can help in the diagnosis of abdominal tuberculosis and describe imaging features that differentiate it from other inflammatory and neoplastic diseases, particularly lymphoma and Crohn's disease. As tuberculosis can affect any organ in the abdomen, emphasis is placed on ileocecal involvement, lymphadenopathy, peritonitis, and solid organ disease (liver, spleen, and pancreas). A positive culture or histologic analysis of a biopsy is still required in many patients for definitive diagnosis. Learning objectives: 1. To review the relevant pathophysiology of abdominal tuberculosis. 2. To illustrate CT findings that can help in the diagnosis.

  15. Featured Image: Interacting Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2017-06-01

    This beautiful image shows two galaxies, IC 2163 and NGC 2207, as they undergo a grazing collision 114 million light-years away. The image is a composite, constructed from Hubble (blue), Spitzer (green), and ALMA (red) data. In a recent study, Debra Elmegreen (Vassar College) and collaborators used the ALMA data to trace the individual molecular clouds in the two interacting galaxies, identifying a total of over 200 clouds that each contain a mass of over a million solar masses. These clouds represent roughly half of the total molecular gas in the two galaxies. Elmegreen and collaborators track the properties of these clouds and their relation to star-forming regions observed with Hubble. For more information about their observations, check out the paper linked below. A closer look at the ALMA observations, with the different emission regions labeled, shows that most of the molecular gas emission comes from the eyelids of IC 2163 and from the nuclear ring and Feature i in NGC 2207. [Elmegreen et al. 2017] Citation: Debra Meloy Elmegreen et al 2017 ApJ 841 43. doi:10.3847/1538-4357/aa6ba5

  16. Localized scleroderma: imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Liu, P. (Dept. of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON (Canada)); Uziel, Y. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada)); Chuang, S. (Dept. of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON (Canada)); Silverman, E. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada)); Krafchik, B. (Div. of Dermatology, Dept. of Pediatrics, Hospital for Sick Children, Toronto, ON (Canada)); Laxer, R. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada))

    1994-06-01

    Localized scleroderma is distinct from the diffuse form of scleroderma and does not show Raynaud's phenomenon and visceral involvement. The imaging features in 23 patients ranging from 2 to 17 years of age (mean 11.1 years) were reviewed. Leg length discrepancy and muscle atrophy were the most common findings (five patients), with two patients also showing modelling deformity of the fibula. One patient with lower extremity involvement showed abnormal bone marrow signals on MR. Disabling joint contracture requiring orthopedic intervention was noted in one patient. In two patients with ''en coup de sabre'' facial deformity, CT and MR scans revealed intracranial calcifications and white matter abnormality in the ipsilateral frontal lobes, with one also showing migrational abnormality. In a third patient, CT revealed white matter abnormality in the ipsilateral parietal lobe. In one patient with progressive facial hemiatrophy, CT and MR scans showed the underlying hypoplastic left maxillary antrum and cheek. Imaging studies of areas of clinical concern revealed positive findings in half our patients. (orig.)

  17. Image Feature Extraction Method Based on SFA and GLCM

    Institute of Scientific and Technical Information of China (English)

    鄢圣藜; 霍宏; 方涛

    2011-01-01

    To address the large intra-class variability of samples in remote sensing images, this paper proposes a feature extraction method based on Slow Feature Analysis (SFA) and the Gray Level Co-occurrence Matrix (GLCM). The image is first transformed with the SFA algorithm, whose biologically inspired visual properties reduce within-class differences; the GLCM is then computed on the transformed image to obtain a new SFA-GLCM feature. Experimental results show that SFA preprocessing reduces intra-class variability in remote sensing images and improves the separability of the features, outperforming the conventional GLCM feature extraction method.

  18. Concrete Slump Classification using GLCM Feature Extraction

    Science.gov (United States)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied to the analysis of concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump using image processing techniques. For this purpose, concrete mixes of 30 MPa compressive strength were designed with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm and analysed. Images were acquired with a Nikon D-7000 camera at high resolution. In the first step, the RGB images were converted to gray scale and then cropped to 1024 x 1024 pixels. The cropped images were then analysed with an open-source program to extract GLCM features. The results show that for higher slump, contrast decreases while correlation, energy, and homogeneity increase.

  19. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    A scale-space extreme point extraction method for a binary, multiscale, rotation-invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms select neighborhood information around feature points that are extrema of the image scale space, obtained by constructing an image pyramid with some signal transform. But building the image pyramid consumes a large amount of computing and storage resources, which hinders practical application development. This paper presents a dual multiscale FAST algorithm that extracts scale-extreme feature points quickly without building an image pyramid. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.

  20. Diagnostic Efficacy of All Series of Dynamic Contrast Enhanced Breast MR Images Using Gradient Vector Flow (GVF) Segmentation and Novel Border Feature Extraction for Differentiation Between Malignant and Benign Breast Lesions

    Directory of Open Access Journals (Sweden)

    L. Bahreini

    2010-12-01

    Background/Objective: To discriminate between malignant and benign breast lesions, conventionally the first series of Breast Subtraction Dynamic Contrast-Enhanced Magnetic Resonance Imaging (BS DCE-MRI) images is used for quantitative analysis. In this study, we investigated whether using all series of these images could provide more diagnostic information. Patients and Methods: This study included 60 histopathologically proven lesions. The steps of this study were as follows: selecting the regions of interest (ROI); segmentation using a Gradient Vector Flow (GVF) snake for the first time; defining new feature sets; using an artificial neural network (ANN) for optimal feature set selection; and evaluation using receiver operating characteristic (ROC) analysis. Results: The GVF snake method correctly segmented 95.3% of breast lesion borders at an overlap threshold of 0.4. The first classifier, which used the optimal feature set extracted only from the first series of BS DCE-MRI images, achieved an area under the curve (AUC) of 0.82 and specificity of 60% at a sensitivity of 81%. The second classifier, which used the same optimal feature set but extracted it from all five series of these images, achieved an AUC of 0.90 and specificity of 79% at a sensitivity of 81%. Conclusion: The GVF snake segmentation results show that it can accurately segment breast lesion borders. According to this study, using all five series of BS DCE-MRI images provides more diagnostic information about the breast lesion and can improve the performance of breast lesion classifiers compared with using the first series alone.

  1. Identifying Image Manipulation Software from Image Features

    Science.gov (United States)

    2015-03-26

    Master's thesis: Identifying Image Manipulation Software from Image Features. Devlin T. Boyter, CPT, USA. AFIT-ENG-MS-15-M-051, Department of the Air Force. The excerpted text gives an overview of the DCT-based encoding process [5] and notes that when an image is processed by lossless compression, the file's size is reduced while the image data is fully preserved.

  2. An Improved Algorithm for Feature Point Extraction from Fingerprint Images

    Institute of Scientific and Technical Information of China (English)

    王建英

    2013-01-01

    Because of poor image quality and noise interference, many spurious minutiae appear when extracting features from fingerprint images. These false feature points greatly reduce matching speed and increase both the false rejection rate and the false acceptance rate of the recognition system. This paper proposes a method that effectively removes spurious fingerprint feature points while retaining the true ones, improving the efficiency of fingerprint recognition.

  3. On Image Feature Extraction for Leaves with Cotton Mite Disease

    Institute of Scientific and Technical Information of China (English)

    宋寅卯; 刁智华; 王云鹏; 王欢

    2013-01-01

    Currently, the identification of plant disease in cotton production relies mainly on visual observation, in which subjective judgement plays a dominant role. To realise timely and reliable diagnosis of cotton mite disease, we study image feature extraction for diseased leaves based on computer image processing technology. First, histograms are used to extract the means and variances of the hue H and the G/R ratio of leaf images as colour features. Second, the gray-level co-occurrence matrix is used to extract the entropy and inertia moment of the gray leaf images as texture features. Experiments indicate that these feature values distinguish leaves with cotton mite disease from normal leaves well. Applying the method to cotton mite disease diagnosis will greatly improve the accuracy of disease identification and is also significant for effective management of the disease.

  4. Wood recognition using image texture features.

    Directory of Open Access Journals (Sweden)

    Hang-jun Wang

    Inspired by theories of higher-order local autocorrelation (HLAC), this paper presents a simple, novel, yet very powerful approach for wood recognition. The method is suitable for wood database applications, which are of great importance in wood-related industries and administrations. At the feature extraction stage, a set of features is extracted from the Mask Matching Image (MMI). The MMI features preserve the mask-matching information gathered by the HLAC methods. The texture information in the image can then be accurately extracted from the statistical and geometrical features. In particular, richer information and enhanced discriminative power are achieved through the length histogram, a new histogram that combines the width and height histograms. The performance of the proposed approach is compared to state-of-the-art HLAC approaches using the wood stereogram dataset ZAFU WS 24. By conducting extensive experiments on ZAFU WS 24, we show that our approach significantly improves classification accuracy.

  5. Medical image feature extraction and computer-aided diagnosis research

    Institute of Scientific and Technical Information of China (English)

    王玉清; 刘忠岐; 王晓夫; 谭丽

    2014-01-01

    Objective: To develop the software for the research project "Study on Extraction of Medical Imaging Features and Computer-Aided Diagnosis and Filtration". Methods: The free, open-source, cross-platform, object-oriented Java programming language and the NetBeans integrated development environment were used for development, following the Singleton design pattern. Results: The project realized the process from clinical features and medical imaging features to disease diagnosis. Conclusion: Computer-aided disease diagnosis through computer techniques is feasible.

  6. FEATURES OF ANTHOCYANIN EXTRACTION WITH ALIPHATIC ALCOHOLS

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as the colorant food additive E163. Ethanol or acidified water is traditionally used to extract them from natural plant raw materials, but in some technologies this is unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, pigment extraction was explored with alcohols differing in the structure of the carbon skeleton and in the position and number of hydroxyl groups. To isolate the anthocyanins, the raw materials were extracted sequentially twice at t = 60 °C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid colorimetry. The color of the black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group: alcohols of normal structure give higher optical density and a larger red color component than their branched isomers. This is due to their different ability to form hydrogen bonds when extracting anthocyanins, and to other intermolecular interactions. During storage, the pigments recovered in the blackcurrant extracts undergo significant structural changes, which leads to a significant change in color; this variation is stronger the longer the carbon skeleton and the more branched the extractant molecule. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols; however, these extracts keep significantly better because of the polyols' reducing ability when interacting with polyphenolic compounds.

  7. Moment Feature Extraction of Images Based on the Radon Transform and Its Application in Image Recognition

    Institute of Scientific and Technical Information of China (English)

    王耀明; 严炜; 俞时权

    2001-01-01

    This article introduces the Radon transform of an image and a method for calculating moments under the Radon transform. Exploiting the noise resistance of the Radon transform, it proposes a method for extracting image moment features that yields a moment feature matrix of the image under the Radon transform. Finally, it gives a method for recognizing images using the singular values (SVs) of this matrix.
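
    One plausible reading of this idea can be sketched as follows (an illustration, not the authors' exact formulation): take the Radon transform of an image, form a moment feature matrix from a few low-order moments of each projection, and use the matrix's singular values as a compact recognition feature.

```python
# Radon-transform moment matrix + singular values as a feature (illustrative).
import numpy as np
from skimage.transform import radon

def radon_moment_signature(img, n_angles=18, n_moments=4):
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img, theta=angles, circle=False)   # columns are projections
    pos = np.arange(sinogram.shape[0])
    M = np.empty((n_moments, n_angles))
    for j in range(n_angles):
        p = sinogram[:, j]
        p = p / (p.sum() + 1e-12)                # normalize each projection
        for k in range(n_moments):
            M[k, j] = np.sum(p * pos ** k)       # k-th raw moment
    return np.linalg.svd(M, compute_uv=False)    # singular values as the feature

rng = np.random.default_rng(5)
img = rng.random((64, 64))
sig = radon_moment_signature(img)
print(sig)
```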

  8. An Image Retrieval Method Using DCT Features

    Institute of Scientific and Technical Information of China (English)

    樊昀; 王润生

    2002-01-01

    In this paper, a new image representation for compressed-domain image retrieval and an image retrieval system are presented. To represent images compactly and hierarchically, multiple features such as color and texture features directly extracted from DCT coefficients are structurally organized using vector quantization. To train the codebook, a new Minimum Description Length vector quantization algorithm is used, which automatically decides the number of code words. To compare two images using the proposed representation, a new efficient similarity measure is designed. The new method is applied to an image database with 1,005 pictures. The results demonstrate that the method is better than two typical histogram methods and two DCT-based image retrieval methods.
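
    A compressed-domain feature in the spirit of this record can be sketched with SciPy's DCT: keep the low-frequency top-left block of coefficients as a compact descriptor. The vector-quantization stage of the paper is omitted, and the images are random stand-ins.

```python
# Low-frequency DCT block as a compact compressed-domain feature.
import numpy as np
from scipy.fft import dctn

def dct_feature(img, block=8):
    coeffs = dctn(img.astype(float), norm="ortho")   # 2-D DCT-II
    return coeffs[:block, :block].ravel()            # 64 low-frequency coeffs

rng = np.random.default_rng(6)
a = rng.random((64, 64))
b = a + 0.01 * rng.random((64, 64))   # near-duplicate of a
c = rng.random((64, 64))              # unrelated image

d_ab = np.linalg.norm(dct_feature(a) - dct_feature(b))
d_ac = np.linalg.norm(dct_feature(a) - dct_feature(c))
print(d_ab < d_ac)   # the near-duplicate is closer in DCT feature space
```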

  9. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey

    2010-09-01

    There are a number of prevailing methods for image mining. This paper combines the features of four techniques: color histogram, color moments, texture, and the Edge Histogram Descriptor (EHD). The perceived nature of an image is based on human perception, while machine interpretation of an image is based on its contours and surfaces. The study of image mining is a challenging task because it involves pattern recognition, an important tool for machine vision systems. A combination of four feature extraction methods is used, with provision to add new features in future for better retrieval efficiency. In this paper, the Euclidean distances computed for each of the four features are averaged, and the user interface is provided by Matlab. The image properties are analyzed using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence matrix based entropy, energy, etc. are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval, images are ranked by the average of the distances over the four techniques.
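
    The averaging scheme can be sketched as below. For brevity, only two stand-in features (a gray histogram and an edge-magnitude histogram) replace the paper's four descriptors; a per-feature Euclidean distance is computed and the mean distance ranks the database.

```python
# Multi-feature retrieval by averaging per-feature Euclidean distances.
import numpy as np

def gray_hist(img):
    h, _ = np.histogram(img, bins=16, range=(0.0, 1.0), density=True)
    return h

def edge_hist(img):
    gy, gx = np.gradient(img.astype(float))          # simple edge magnitude
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(mag, bins=16, range=(0.0, mag.max() + 1e-9), density=True)
    return h

def avg_distance(q, img):
    feats = (gray_hist, edge_hist)
    return np.mean([np.linalg.norm(f(q) - f(img)) for f in feats])

rng = np.random.default_rng(7)
db = [rng.random((32, 32)) for _ in range(6)]
query = db[4]
ranked = sorted(range(len(db)), key=lambda i: avg_distance(query, db[i]))
print(ranked[0])   # the query's own entry ranks first
```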

  10. Imaging features of thalassemia

    Energy Technology Data Exchange (ETDEWEB)

    Tunaci, M.; Tunaci, A.; Engin, G.; Oezkorkmaz, B.; Acunas, G.; Acunas, B. [Dept. of Radiology, Istanbul Univ. (Turkey); Dincol, G. [Dept. of Internal Medicine, Istanbul Univ. (Turkey)

    1999-07-01

    Thalassemia is a chronic, inherited, microcytic anemia characterized by defective hemoglobin synthesis and ineffective erythropoiesis. In all thalassemias, the clinical features that result from anemia and from transfusional and absorptive iron overload are similar but vary in severity. The radiographic features of {beta}-thalassemia are due in large part to marrow hyperplasia. The markedly expanded marrow space leads to various skeletal manifestations involving the spine, skull, facial bones, and ribs. Extramedullary hematopoiesis (ExmH), hemosiderosis, and cholelithiasis are among the non-skeletal manifestations of thalassemia. The skeletal X-ray findings show the characteristics of chronic marrow overactivity. In this article both skeletal and non-skeletal manifestations of thalassemia are discussed with an overview of X-ray findings, including MRI and CT findings. (orig.)

  11. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to extract curve features that effectively describe a fingerprint. This article proposes a novel algorithm that uses information from a few nearby fingerprint ridges to extract a new characteristic describing the curvature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics it extracts clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  13. Extraction and assessment of chatter feature

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Presents feature wavelet packets (FWP), a new method of chatter feature extraction in the milling process based on the wavelet packet transform (WPT) and vibration signals. Studies the procedure of automatic feature selection for a given process. Establishes an exponential autoregressive (EAR) model to extract the limit-cycle behavior of chatter, since chatter is a nonlinear oscillation with a limit cycle. Gives a way to determine the number of FWPs, and experimental data to assess the effectiveness of the WPT feature extraction via the unforced response of the EAR model of the reconstructed signal.

  14. Straight line feature based image distortion correction

    Institute of Scientific and Technical Information of China (English)

    Zhang Haofeng; Zhao Chunxia; Lu Jianfeng; Tang Zhenmin; Yang Jingyu

    2008-01-01

    An image distortion correction method is proposed that uses straight-line features. Many parallel lines in different directions were extracted from different images and then used to optimize the distortion parameters by nonlinear least squares, with a step-by-step strategy added to the optimization. The 3D world coordinates do not need to be known, and the method is easy to implement. Experimental results show its high accuracy.

  15. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models; those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that use features automatically extracted from images by computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow large databases of mammograms to be searched automatically in order to identify challenging cases for each trainee. Methods: The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different

  16. Image Mining Using Texture and Shape Feature

    Directory of Open Access Journals (Sweden)

    Prof.Rupali Sawant

    2010-12-01

    Full Text Available Discovering knowledge from data stored in typical alphanumeric databases, such as relational databases, has been the focal point of most of the work in database mining. However, with advances in secondary and tertiary storage capacity, coupled with relatively low storage cost, more and more non-standard data (in the form of images) is being accumulated. This vast collection of image data can also be mined to discover new and valuable knowledge. During the process of image mining, concepts in different hierarchies and their relationships are extracted at different hierarchies and granularities, and association rule mining and concept clustering are then implemented. The generalization and specialization of concepts are realized in different hierarchies: lower-layer concepts can be upgraded to upper-layer concepts, and upper-layer concepts guide the extraction of lower-layer concepts. It is a process from image data to image information, from image information to image knowledge, and from lower-layer concepts to upper-layer concepts, for which an approach based on concept lattices and cloud model theory is proposed. The methods of image mining from image texture and shape features introduced here comprise the following basic steps: first, pre-process the images; second, use the cloud model to extract concepts; and last, use the concept lattice to extract a series of image knowledge.

  17. Design Approach for Content-based Image Retrieval using Gabor-Zernike features

    Directory of Open Access Journals (Sweden)

    Abhinav Deshpande

    2012-04-01

    Full Text Available The process of extracting different features from an image is known as Content-Based Image Retrieval. Color, texture and shape are the major features of an image and play a vital role in its representation. In this paper, a novel method is proposed to extract the region of interest (ROI) from an image prior to the extraction of its salient features. The image is subjected to normalization so that the noise components due to Gaussian or other types of noise present in the image are eliminated and the successful extraction of the various features of the image can be accomplished. Gabor filters are used to extract the texture feature from an image, whereas Zernike moments can be used to extract the shape feature. The Gabor and Zernike features are then combined to form Gabor-Zernike features for the image.
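    As a rough illustration of the texture half of such a Gabor-Zernike descriptor, the sketch below convolves an image with a small bank of oriented Gabor kernels and keeps the response statistics as a texture vector. All parameter values (kernel size, sigma, wavelength, the four orientations) are illustrative assumptions, not values from the paper, and the Zernike shape moments are omitted:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta and wavelength lambd."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def gabor_texture_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and std of the filter response at each orientation -> texture vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # circular convolution via FFT is good enough for this illustrative sketch
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, image.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats)
```

With four orientations the vector has eight entries; a real system would also append the Zernike shape moments computed on the segmented ROI.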

  18. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  19. Fast SIFT design for real-time visual feature extraction.

    Science.gov (United States)

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.
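    The integral image that underlies LPSIFT's cost reduction can be sketched in a few lines: once the summed-area table is built, the sum over any axis-aligned box is obtained in constant time, which is what makes repeated box-filter approximations of Gaussian blur cheap. The function names below are illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1), independent of the box size."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```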

  20. Image feature detectors and descriptors foundations and applications

    CERN Document Server

    Hassaballah, Mahmoud

    2016-01-01

    This book provides readers with a selection of high-quality chapters that cover both theoretical concepts and practical applications of image feature detectors and descriptors. It serves as a reference for researchers and practitioners by featuring survey chapters and research contributions on image feature detectors and descriptors. Additionally, it emphasizes several keywords in both theoretical and practical aspects of image feature extraction. The keywords include acceleration of feature detection and extraction, hardware implementations, image segmentation, evolutionary algorithms, ordinal measures, as well as visual speech recognition.

  1. Anthropometric features extracted from calibrated image sequences by a single camera

    Institute of Scientific and Technical Information of China (English)

    刘少华; 杜奎

    2013-01-01

    This paper proposes a method for extracting anthropometric parameters from image sequences calibrated for a single camera. The method reduces the mask image of the human body to a line model, from which the key points of the body, such as the top of the head, the shoulders, the barycentre and the feet, are obtained. The real-world coordinates of the key points, computed by back-projection, then yield the anthropometric parameters: stature, shoulder height, shoulder breadth and pace length. The experimental results show that the method extracts the anthropometric parameters with small error and strong usability.

  2. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain, using a selection of the most valuable texture features, in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected with texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, de-noising is performed using Daubechies wavelets; ii) the original images are transformed into feature images using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the feature images following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measured features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test method. P-values are computed for the pairs of features in order to measure their efficacy.

  3. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  4. ECG Feature Extraction Techniques - A Survey Approach

    CERN Document Server

    Karpagachelvi, S; Sivakumar, M

    2010-01-01

    ECG feature extraction plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. A feature extraction scheme determines the amplitudes and intervals in the ECG signal for subsequent analysis; the amplitudes and intervals of the P-QRS-T segment characterize the functioning of the heart. Recently, numerous techniques have been developed for analyzing the ECG signal, mostly based on fuzzy logic methods, artificial neural networks (ANN), genetic algorithms (GA), support vector machines (SVM), and other signal analysis techniques. All these techniques and algorithms have their advantages and limitations. This paper discusses the various techniques and transforms proposed in the earlier literature for extracting features from an ECG signal, and in addition provides a comparative study of the methods proposed by researchers for this task.

  5. A Robust Feature Extraction Scheme in Image Copy Detection

    Institute of Scientific and Technical Information of China (English)

    余艳玮; 周学海; 许华杰

    2013-01-01

    Most current image copy detection methods are robust to noise-like distortions but quite fragile to geometric distortions such as rotation, shift, translation, scaling and cropping. Zou et al. proposed an image copy detection scheme that combines a cirque (ring) division strategy with the ordinal measure method to extract a compact image feature, with good robustness against rotation and scaling. However, Zou's method cannot resist distortions such as aspect-ratio change, or rotation and scaling combined with cropping that preserves the image centre, because the circular region divided into rings depends on the image size and is unstable under these attacks. To overcome this weakness, a robust feature extraction scheme for copy detection is proposed: a geometrically invariant circular region is first constructed adaptively and divided into concentric rings, and the ordinal measure of the rings is then computed as the image feature vector, improving the robustness of Zou's method against geometric attacks. The experimental results show that the proposed method outperforms Zou's in resisting aspect-ratio change and rotation and scaling combined with centre-preserving cropping.

  6. Feature Extraction for a Class of Uneven Illumination Images

    Institute of Scientific and Technical Information of China (English)

    吴金杰; 杨翠荣; 杨勇; 庞全

    2011-01-01

    Changes in lighting are unavoidable and have a large effect on the way an object looks, which makes it challenging to find a robust local invariant feature descriptor for unevenly illuminated images. Most approaches to feature extraction are based on the premise that the color image is first converted to grayscale. This paper presents a new approach that introduces color-invariant components based on the Kubelka-Munk model, instead of the gray space, to detect corners. It uses multi-scale Harris corner detection to capture the features of the image, so more details are revealed. Experimental results show that, compared with traditional feature detection methods, the feature points obtained by this algorithm are more numerous, more evenly distributed and more robust, which largely resolves the influence of uneven illumination.

  7. Accurate Image Retrieval Algorithm Based on Color and Texture Feature

    Directory of Open Access Journals (Sweden)

    Chunlai Yan

    2013-06-01

    Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of the visual content (features) of an image, CBIR aims to find images that contain the specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of an image, as well as similarity measures, are investigated. On the basis of this theoretical research, an image retrieval system based on color and texture features is designed. In this system, a weighted color feature based on HSV space is adopted as the color feature vector; four features of the co-occurrence matrix, namely energy, entropy, inertia quadrature and correlation, are used to construct the texture vector; and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.
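    The four co-occurrence-matrix texture features named in the abstract (energy, entropy, inertia quadrature and correlation) can be sketched as follows. The offset, the number of gray levels and the quantization step are illustrative assumptions, not the system's actual settings:

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=8):
    """Normalized gray-level co-occurrence matrix for pixel offset (dr, dc)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    m = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[q[r, c], q[r + dr, c + dc]] += 1
    return m / m.sum()

def glcm_features(p):
    """Energy, entropy, inertia (contrast) and correlation of a GLCM p."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    inertia = ((i - j) ** 2 * p).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    si = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / (si * sj) if si > 0 and sj > 0 else 0.0
    return energy, entropy, inertia, corr
```

For an image of alternating black/white columns, the horizontal GLCM concentrates on the (0, 7) and (7, 0) cells, giving maximal inertia and strongly negative correlation.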

  8. Imaging features of aggressive angiomyxoma

    Energy Technology Data Exchange (ETDEWEB)

    Jeyadevan, N.N.; Sohaib, S.A.A.; Thomas, J.M.; Jeyarajah, A.; Shepherd, J.H.; Fisher, C

    2003-02-01

    AIM: To describe the imaging features of aggressive angiomyxoma, a rare benign mesenchymal tumour most frequently arising from the perineum in young female patients. MATERIALS AND METHODS: We reviewed the computed tomography (CT) and magnetic resonance (MR) imaging features of patients with aggressive angiomyxoma who were referred to our hospital. The imaging features were correlated with clinical information and pathology in all patients. RESULTS: Four CT and five MR studies were available for five patients (all women; mean age 39, range 24-55). Three patients had recurrent tumour at follow-up. CT and MR imaging demonstrated a well-defined mass displacing adjacent structures. The tumour was of low attenuation relative to muscle on CT. On MR, the tumour was isointense relative to muscle on T1-weighted images, hyperintense on T2-weighted images, and enhanced avidly after gadolinium contrast with a characteristic 'swirled' internal pattern. MR imaging demonstrates the extent of the tumour and its relation to the pelvic floor. Recurrent tumour has a similar appearance to the primary lesion. CONCLUSION: The MR appearances of aggressive angiomyxoma are characteristic, and the diagnosis should be considered in any young woman presenting with a well-defined mass arising from the perineum. Jeyadevan, N. N. et al. (2003). Clinical Radiology 58, 157-162.

  9. Image retrieval using both color and texture features

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In order to improve retrieval performance for images, this paper proposes an efficient approach for extracting and retrieving color images. The block diagram of our proposed approach to content-based image retrieval (CBIR) is given first, and then three feature extraction methods are introduced: the color histogram, the edge histogram and the edge direction histogram. The histogram Euclidean distance, the cosine distance and the histogram intersection are used to measure image-level similarity. On the basis of using the color and texture features separately, a new method for image retrieval using the combined features is proposed. Tests on an image database of 766 general-purpose images, with comparison and analysis of the performance of the features and similarity measures, show that the proposed retrieval approach performs promisingly: the combined features are superior in retrieval to each of the three features alone.
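    The three similarity measures named in the abstract can be sketched directly; the inputs are assumed to be normalized histograms:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return np.minimum(h1, h2).sum()

def euclidean_dist(h1, h2):
    """Euclidean distance between histogram vectors (0 for identical ones)."""
    return np.sqrt(((h1 - h2) ** 2).sum())

def cosine_dist(h1, h2):
    """Cosine distance: 1 minus the cosine of the angle between the vectors."""
    return 1.0 - h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2))
```

Note that intersection is a similarity (higher is better), while the other two are distances (lower is better), so their signs must be handled consistently when ranking retrieval results.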

  10. Onboard Image Registration from Invariant Features

    Science.gov (United States)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C.

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
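    The RANSAC step can be illustrated with a deliberately simplified model: a pure 2-D translation between matched keypoint locations, estimated from one correspondence per hypothesis. The paper estimates a fuller transformation from SIFT matches; the function name, iteration count and tolerance below are hypothetical:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, seed=0):
    """Estimate a 2-D translation src -> dst, robust to outlier matches."""
    rng = np.random.RandomState(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = rng.randint(len(src))
        t = dst[i] - src[i]                      # 1-point model for a translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = (err < tol).sum()
        if inliers > best_inliers:
            best_inliers, best_t = inliers, t
    # refit on the consensus set of the best hypothesis
    mask = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[mask] - src[mask]).mean(axis=0)
```

A full registration pipeline would use the same consensus logic with a similarity or projective model requiring two or four correspondences per hypothesis.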

  11. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, limited awareness of the deaf community widens the communication gap between deaf and hard-of-hearing people and the hearing population. Sign languages are developed for deaf and hard-of-hearing people to convey messages by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  13. Combining Multiple Feature Extraction Techniques for Handwritten Devnagari Character Recognition

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present an OCR for handwritten Devnagari characters. Basic symbols are recognized by a neural classifier. We use four feature extraction techniques, namely intersection, shadow features, chain code histogram and straight-line fitting features. Shadow features are computed globally for the character image, while intersection features, chain code histogram features and line fitting features are computed by dividing the character image into different segments. A weighted majority voting technique is used for combining the classification decisions obtained from four Multi-Layer Perceptron (MLP) based classifiers. In experiments with a dataset of 4900 samples, the overall recognition rate observed is 92.80% when the top five choices are considered. This method is compared with other recent methods for handwritten Devnagari character recognition, and it has been observed that this approach has a better success rate than the other methods.

  14. Image Feature Extraction Algorithm of Multi-scale Pyramid Based on Phase Congruency

    Institute of Scientific and Technical Information of China (English)

    黄蕾; 邹海

    2015-01-01

    Image feature extraction is a key issue in the field of digital image processing and pattern recognition, and feature extraction methods keep emerging. The phase congruency feature extraction method extracts image features from local phase information and has the advantage of invariance to brightness and contrast, but it is still insufficient for contour feature extraction. In order to fully account for the influence of multi-resolution, multi-scale processing on feature extraction, we present a multi-scale pyramid feature extraction algorithm based on phase congruency, whose key steps are Laplacian pyramid decomposition and multi-scale feature image fusion. The experimental results show that the new algorithm is superior to the conventional phase congruency algorithm in extracting image contour features.
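    The Laplacian pyramid decomposition at the heart of the algorithm can be sketched as follows. A 2x2 box filter stands in for the usual Gaussian reduce, nearest-neighbour repetition for the expand, and even image dimensions are assumed; each level stores the detail lost by one reduce/expand round trip, so the decomposition is exactly invertible:

```python
import numpy as np

def downsample(img):
    """2x2 box-filter then decimate — a simple stand-in for Gaussian reduce."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour expand back to `shape` (even dimensions assumed)."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Band-pass detail images plus one coarse residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low, cur.shape))  # detail lost at this level
        cur = low
    pyr.append(cur)                                  # coarsest residual
    return pyr

def reconstruct(pyr):
    """Invert the decomposition by expanding and adding the details back."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur, detail.shape) + detail
    return cur
```

In the paper's setting, the per-level phase congruency maps would be computed on the pyramid levels and then fused; here only the decomposition itself is shown.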

  15. Extraction Method of Shape Features for Vegetables Based on Depth Images

    Institute of Scientific and Technical Information of China (English)

    李长勇; 曹其新

    2012-01-01

    A method of shape feature extraction based on depth images is proposed for the classification of tomato shapes, addressing the problem that a 2-D projection carries too little shape information for accurate grading. First, the tomato is separated from the background through image segmentation in color space. Second, the point cloud of the tomato is obtained by utilizing a 3-D machine vision measuring device; in order to perform shape feature extraction at the same scale, the depth values of the tomato are normalized. The depth map of the tomato is then formed by combining the segmentation result with the depth information, sampled in polar coordinates, and the sampled data re-plotted in Cartesian coordinates. Finally, the Fourier transform of the re-plotted depth map is taken in Cartesian coordinates and a generic Fourier descriptor (GFD) is calculated from it. The descriptor is invariant to translation, rotation and scaling, and captures the shape of the tomato both in depth and laterally. The depth-image-based GFD and the general GFD were used in turn in a tomato grading experiment; the result showed that the mean accuracy of the former classification was up to 92%, higher than that of the latter.

  16. Linguistic feature analysis for protein interaction extraction

    Directory of Open Access Journals (Sweden)

    Cornelis Chris

    2009-11-01

    Full Text Available Abstract Background The rapid growth of the amount of publicly available reports on biomedical experimental results has recently caused a boost of text-mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic, i.e., lexical and syntactic, data extracted from text. However, only a few attempts have been made to evaluate the contribution of the different feature types. In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features, i.e., grammatical relations, shallow syntactic features (part-of-speech information) and lexical features. For this purpose, we use a recently proposed approach that uses support vector machines with structured kernels. Results Our results reveal that the contribution of the different feature types varies for the different data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, classifiers based on deep syntactic information prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared. Conclusion Our findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small relative to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features that are typically used in recent approaches.

  17. Extracting Features of Acacia Plantation and Natural Forest in the Mountainous Region of Sarawak, Malaysia by ALOS/AVNIR2 Image

    Science.gov (United States)

    Fadaei, H.; Ishii, R.; Suzuki, R.; Kendawang, J.

    2013-12-01

    Remote sensing techniques provide useful information for detecting spatio-temporal changes in the land cover of tropical forests. Land cover characteristics derived from satellite images can be applied to the estimation of ecosystem services and biodiversity over an extensive area, and such land cover information is valuable for global and local communities in understanding the significance of the tropical ecosystem. This study was conducted in Acacia plantations and natural forest situated in the mountainous region of Sarawak, Malaysia, which has different ecological characteristics from the flat lowland areas. The main objective of this study is to extract and compare their characteristics by analyzing ALOS/AVNIR2 images together with ground truth obtained from a forest survey. We implemented a ground-based forest survey of the Acacia plantations and natural forest in the mountainous region of Sarawak, Malaysia in June 2013 and acquired forest structure data (tree height, diameter at breast height (DBH), crown diameter, tree spacing) and spectral reflectance data at three 10 x 10 m sample plots of the Acacia plantation. As for the spectral reflectance data, we measured the spectral reflectance of the end members of the forest, such as leaves, stems, road surface and forest floor, with a spectro-radiometer. These forest structure and spectral data were incorporated into the image analysis by support vector machine (SVM) and object-based/texture analysis. Consequently, land covers in the AVNIR2 image were classified into three forest types (natural forest, oil palm plantation and Acacia mangium plantation), and the characteristics of each category were examined. We additionally used the tree age data of the Acacia plantation for the classification. A unique feature was found in the vegetation spectral reflectance of the Acacia plantations. The curve of the spectral reflectance shows two peaks around 0.3μm and 0.6 - 0.8μm that can be assumed to

  18. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural networks (PCNN) have been widely used in image processing. The 3D binary map series (BMS) generated by a PCNN effectively describes image feature information such as edges and regional distribution, so the BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS did not consider the correlation of the binary sequences in BMS or the spatial structure of each map. By further processing of BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among the maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in the binary images on the OTS-BMS. Then, by computing the 2D entropy of every map in the FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure the distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, showing better recognition performance than other feature extraction methods.

  19. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  20. Fast Extraction of Conjugated Area Features and Accurate Registration of Remote Sensing Images

    Institute of Scientific and Technical Information of China (English)

    辛亮; 张景雄

    2011-01-01

    使用数学形态学的"膨胀算子"对影像进行预处理,提出了一种改进的基于高斯拉普拉斯算子的面状特征提取和细化方法,并利用边界代数快速标注边界封闭的面状特征.在提取面状特征的基础上,利用奇异值分解算法,实现了基于面状质心的遥感影像匹配,进而完成精确配准.实验结果表明,与传统方法相比,此方法在速度与准确度上具有明显优势.%Extraction and matching of conjugate image features is prerequisite for registration of multi-sensors images.Feature of image includes points, lines and polygons.We focus on area feature-based image registration for the reason that area features improves registration accuracy.More importantly, area features are often the sole basis for image registration.We propose using 'dilation' operator in mathematical morphology as a pre-processing procedure to prevent boundaries extracted using conventional Laplacian of Gaussian (LoG) operator from becoming discontinuous.We use a boundary algebra algorithm to mark area features with closed boundaries rapidly.We explory singular value decomposition (SVD) to match images based on centroids of area features extracted beforehand.Experiments confirmed that the proposed methods are superior over conventional methods in terms of speed and accuracy.

  1. Applying Principal Curves in Complex Fingerprint Image Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    高迎; 张红云

    2013-01-01

    In an automated fingerprint recognition system, feature extraction is one of the key procedures. A principal curve has the property of self-consistency, can depict pattern features well, and can effectively preserve structural information. We therefore choose the generalized polygonal principal curve algorithm, improve it to extract the principal curves of a fingerprint, and on this basis further implement fingerprint feature extraction and pseudo-feature detection. Experimental results demonstrate that the algorithm obtains a better fingerprint skeleton in a short time, and the accuracy of fingerprint feature extraction is also higher.

  2. Face Feature Extraction for Recognition Using Radon Transform

    Directory of Open Access Journals (Sweden)

    Justice Kwame Appati

    2016-07-01

    Full Text Available Face recognition has long been a challenging exercise, especially when it comes to recognizing faces under different poses, perhaps due to the use of inappropriate descriptors during the feature extraction stage. In this paper, the Radon transform is thoroughly examined as a face signature descriptor on a standard database. Global features are considered by constructing Gray Level Co-occurrence Matrices (GLCMs); correlation, energy, homogeneity and contrast are computed from each image to form the feature vector for recognition. We show that the transformed face signatures are robust and invariant to pose differences. With the statistical features extracted, face training classes are optimally separated using a Support Vector Machine (SVM), while the recognition rate for test face images is computed based on the L1 norm.
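The GLCM statistics this abstract lists can be sketched in plain NumPy as below. This is a minimal illustration, not the paper's implementation: the pixel offset, number of grey levels and the random test image are invented for the example.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to joint probabilities

def glcm_features(p):
    """Contrast, energy, homogeneity and correlation from a normalised GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    si = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sj = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (si * sj)
    return np.array([contrast, energy, homogeneity, correlation])

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))   # stand-in for a quantised face image
features = glcm_features(glcm(img))
```

In practice several offsets and orientations are accumulated into one feature vector before it is handed to the SVM.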

  3. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of feature extraction for radar target classification using a J-band pulse radar. Feature extraction is based on frequency-analysis methods, the discrete Fourier transform (DFT) and Multiple Signal Classification (MUSIC), exploiting the Doppler effect. The analysis led to a preference for the DFT with a Hanning window. We aim to classify vehicles into two classes: wheeled and tracked. The results show that classification is possible only while the vehicle is moving, since the class feature arises from the movement of the vehicle's moving parts. We found no feature that distinguishes wheeled from tracked vehicles while they are stationary, even with their engines running.
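The windowed-DFT Doppler feature can be sketched as follows. All numbers here (pulse repetition frequency, simulated Doppler shift, record length) are made-up illustration values, not the paper's experimental settings.

```python
import numpy as np

fs = 1000.0                       # assumed pulse repetition frequency (Hz)
t = np.arange(256) / fs
doppler = 120.0                   # simulated Doppler shift of a moving target
rng = np.random.default_rng(1)
echo = np.cos(2 * np.pi * doppler * t) + 0.1 * rng.standard_normal(t.size)

# Hanning window suppresses spectral leakage before the DFT
win = np.hanning(echo.size)
spectrum = np.abs(np.fft.rfft(echo * win))
freqs = np.fft.rfftfreq(echo.size, d=1 / fs)

peak_freq = freqs[np.argmax(spectrum)]   # dominant Doppler line used as a feature
```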

  4. Local features for enhancement and minutiae extraction in fingerprints.

    Science.gov (United States)

    Fronthaler, Hartwig; Kollreider, Klaus; Bigun, Josef

    2008-03-01

    Accurate fingerprint recognition presupposes robust feature extraction, which is often hampered by noisy input data. We suggest common techniques for both enhancement and minutiae extraction, employing symmetry features. For enhancement, a Laplacian-like image pyramid is used to decompose the original fingerprint into sub-bands corresponding to different spatial scales. In a further step, contextual smoothing is performed on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor (linear symmetry features). For minutiae extraction, parabolic symmetry is added to the local fingerprint model, which allows the position and direction of a minutia to be detected accurately and simultaneously. Our experiments support the view that the suggested parabolic symmetry features, whose extraction requires no explicit thinning or other morphological operations, constitute a robust alternative to conventional minutiae extraction. All necessary image processing is done in the spatial domain using 1-D filters only, avoiding block artifacts that reduce the biometric information. We present comparisons to other studies on enhancement in matching tasks employing the open source matcher from NIST, FIS2. Furthermore, we compare the proposed minutiae extraction method with the corresponding method from the NIST package, mindtct. A top-five commercial matcher from FVC2006 is also used to quantify the enhancement. The matching error is lowered significantly when plugging in the suggested methods. The FVC2004 fingerprint database, notable for its exceptionally low-quality fingerprints, is used for all experiments.

  5. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  6. A NOVEL REGION FEATURE USED IN MULTISENSOR IMAGE FUSION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new region feature that emphasizes the salience of a target region and its neighbors is proposed. In a region segmentation-based multisensor image fusion scheme, the presented feature can be extracted from each segmented region to determine the fusion weight. Experimental results demonstrate that the proposed feature has a wide application scope and provides much more information for each region: it can be used not only in image fusion but also in other image processing applications.

  7. Extracting Product Features from Chinese Product Reviews

    Directory of Open Access Journals (Sweden)

    Yahui Xi

    2013-12-01

    Full Text Available With the great development of e-commerce, the number of product reviews on e-commerce websites grows rapidly. Review mining, which aims to discover valuable information from these massive product reviews, has recently received a lot of attention. Product feature extraction is one of the basic tasks of product review mining, and its effectiveness can significantly influence the performance of subsequent jobs. Double Propagation is a state-of-the-art technique for product feature extraction. In this paper, we apply Double Propagation to product feature extraction from Chinese product reviews and adopt several techniques to improve precision and recall. First, indirect relations and verb product features are introduced to increase recall. Second, when ranking candidate product features using HITS, we expand the number of hubs by means of the dependency relation patterns between product features and opinion words to improve precision. Finally, Normalized Pattern Relevance is employed to filter the extracted product features. Experiments on diverse real-life datasets show promising results.

  8. Feature Extraction and Selection From the Perspective of Explosive Detection

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image that summarize its information content and, in the process, provide an essential tool in image understanding. In particular, they are useful for classifying images into pre-defined classes or for grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of pixels in an image, which may be derived from a variety of sources: for example, the temperature measurement (using an infra-red camera) of the area representing the pixel, the X-ray attenuation in a given volume element of a 3-D image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered features; examples are area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components and the Euler number (the number of connected components less the number of 'holes'). Occupying an intermediate level in the feature hierarchy are texture features, typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel; these are useful not only in classification but also in segmenting an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-ray device with provisions for computed tomography (CT) that generate one or more (depending on the number of energy levels used

  9. Featured Image: A Comet's Coma

    Science.gov (United States)

    Kohler, Susanna

    2016-11-01

    This series of images (click for the full view!) features the nucleus of comet 67P/Churyumov-Gerasimenko. The images were taken with the Wide Angle Camera of Rosetta's OSIRIS instrument as Rosetta orbited comet 67P. Each column represents a different narrow-band filter that allows us to examine the emission of a specific fragment species, and the images progress in time from January 2015 (top) to June 2015 (bottom). In a recent study, Dennis Bodewits (University of Maryland) and collaborators used these images to analyze the comet's inner coma, the cloud of gas and dust produced around the nucleus as ices sublime. OSIRIS's images allowed the team to explore how 67P's inner coma changed over time as the comet approached the Sun, marking the first time we've been able to study such an environment at this level of detail. To read more about what Bodewits and collaborators learned, you can check out their paper below! Citation: D. Bodewits et al 2016 AJ 152 130. doi:10.3847/0004-6256/152/5/130

  10. Feature extraction for structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois [Los Alamos National Laboratory; Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Nishio, Mayuko [UNIV OF TOKYO; Worden, Keith [UNIV OF SHEFFIELD; Takeda, Nobuo [UNIV OF TOKYO

    2010-11-08

    This study focuses on defining and comparing response features that can be used in structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a method for comparing multivariate feature vectors for statistical model validation. Results show that outlier detection using the Mahalanobis distance metric can be an effective and quantifiable technique for selecting appropriate model parameters. However, in this process one must consider not only the sensitivity of the features being used but also the correlation of the parameters being compared.
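The Mahalanobis-distance outlier test mentioned above can be sketched in a few lines of NumPy. The feature population and candidate vectors below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
# feature vectors from a reference population (e.g. spectral features of runs
# that match the experiment); here just a synthetic 3-D Gaussian cloud
features = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=200)
candidate_ok = np.array([0.1, -0.2, 0.3])    # close to the population
candidate_bad = np.array([6.0, 6.0, 6.0])    # far from the population

mean = features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(features, rowvar=False))

def mahalanobis(x):
    """Mahalanobis distance of x from the reference feature population."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

d_ok, d_bad = mahalanobis(candidate_ok), mahalanobis(candidate_bad)
```

A threshold on this distance (e.g. from a chi-squared quantile) then flags feature vectors, and hence candidate model parameters, as inconsistent with the reference data.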

  11. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images, such as eyes, nose and mouth, are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.

  12. Feature representation of RGB-D images using joint spatial-depth feature pooling

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2016-01-01

    utilizes depth information only to extract local features, without considering it to improve robustness and discriminability of the feature representation by merging depth cues into feature pooling. Spatial pyramid model (SPM) has become the standard protocol to split a 2D image plane into sub......-regions for feature pooling of RGB-D images. We argue that SPM may not be the optimal pooling scheme for RGB-D images, as it only pools features spatially and completely discards their depth topological structures. Instead, we propose a novel joint spatial-depth pooling (JSDP) scheme which further partitions SPM...

  13. Fixed kernel regression for voltammogram feature extraction

    Science.gov (United States)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
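The idea of fixed kernel regression (a basis that is chosen in advance rather than fitted, with least-squares coefficients serving as the compact feature vector) can be sketched as below. The simulated voltammogram, kernel centres and width are invented for illustration and are not the paper's settings.

```python
import numpy as np

# simulated voltammogram: current vs. potential with two peaks plus noise
rng = np.random.default_rng(0)
v = np.linspace(0.0, 1.0, 200)
current = np.exp(-((v - 0.3) / 0.05) ** 2) + 0.6 * np.exp(-((v - 0.7) / 0.08) ** 2)
current += 0.01 * rng.standard_normal(v.size)

# fixed Gaussian kernels: centres and width chosen beforehand, not fitted
centres = np.linspace(0.0, 1.0, 15)
width = 0.06
K = np.exp(-((v[:, None] - centres[None, :]) / width) ** 2)

# the least-squares coefficients compress the whole curve into 15 numbers
coeffs, *_ = np.linalg.lstsq(K, current, rcond=None)
reconstruction = K @ coeffs
```

The coefficient vector, rather than the raw 200-sample curve, is then fed to the classifier.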

  14. Analyzing edge detection techniques for feature extraction in dental radiographs

    Directory of Open Access Journals (Sweden)

    Kanika Lakhani

    2016-09-01

    Full Text Available Several dental problems can be detected using radiographs, but the main issue with radiographs is that defects are not very prominent in them. In this paper, two well-known edge detection techniques are implemented on a set of 20 radiographs, and the number of pixels in each image is calculated. A Gaussian filter is then applied to smooth the images so as to highlight the defect in the tooth. If image data are available as pixels for both healthy and decayed teeth, the images can easily be compared using edge detection techniques, making diagnosis much easier. Further, the Laplacian edge detection technique is applied to sharpen the edges of a given image. The aim is to detect discontinuities in dental radiographs when compared with the original healthy tooth. Future work includes feature extraction on the images for the classification of dental problems.
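The Gaussian-smooth-then-Laplacian step described above can be sketched in NumPy. The 3x3 kernels are textbook approximations and the synthetic "radiograph" is invented for the example.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution, sufficient for small demo kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0   # 3x3 Gaussian
laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])       # Laplacian

# synthetic "radiograph": bright tooth region on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

smoothed = conv2d(img, gauss)              # suppress noise first
edges = np.abs(conv2d(smoothed, laplace))  # then highlight discontinuities
```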

  15. Image feature localization by multiple hypothesis testing of Gabor features.

    Science.gov (United States)

    Ilonen, Jarmo; Kamarainen, Joni-Kristian; Paalanen, Pekka; Hamouz, Miroslav; Kittler, Josef; Kälviäinen, Heikki

    2008-03-01

    Several novel and particularly successful object and object-category detection and recognition methods based on image features, local descriptions of object appearance, have recently been proposed. The methods are based on a localization of image features and a spatial constellation search over the localized features. The accuracy and reliability of the methods depend on the success of both tasks: image feature localization and spatial constellation model search. In this paper, we present an improved algorithm for image feature localization. The method is based on complex-valued multiresolution Gabor features and their ranking using multiple hypothesis testing. The algorithm provides very accurate local image features over arbitrary scale and rotation. We discuss in detail issues such as the selection of filter parameters, the confidence measure, and the magnitude versus complex representation, and show on a large test sample how these influence the performance. The versatility and accuracy of the method are demonstrated on two profoundly different challenging problems (faces and license plates).
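A complex-valued Gabor filter bank of the kind the abstract relies on can be sketched as follows. The frequencies, orientations and envelope width below are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=21):
    """Complex 2-D Gabor filter: Gaussian envelope times a complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # carrier axis after rotation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

# a small multiresolution bank: a few frequencies times a few orientations
bank = [gabor_kernel(f, th, sigma=4.0)
        for f in (0.1, 0.2, 0.3)
        for th in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_response(img, kernel):
    """Magnitude of the filter response at the image centre (a local feature)."""
    h = kernel.shape[0]
    y0 = (img.shape[0] - h) // 2
    x0 = (img.shape[1] - h) // 2
    patch = img[y0:y0 + h, x0:x0 + h]
    return abs(np.sum(patch * np.conj(kernel)))
```

The vector of such magnitudes over the whole bank describes the local appearance at a candidate feature point; the paper then ranks candidates by hypothesis testing on these responses.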

  16. Minutiae Extraction from Fingerprint Images - a Review

    CERN Document Server

    Bansal, Roli; Bedi, Punam

    2012-01-01

    Fingerprints are the oldest and most widely used form of biometric identification. Everyone is known to have unique, immutable fingerprints. As most Automatic Fingerprint Recognition Systems are based on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is very important. However, fingerprint images get degraded and corrupted due to variations in skin and impression conditions. Thus, image enhancement techniques are employed prior to minutiae extraction. A critical step in automatic fingerprint matching is to reliably extract minutiae from the input fingerprint images. This paper presents a review of a large number of techniques present in the literature for extracting fingerprint minutiae. The techniques are broadly classified as those working on binarized images and those that work on gray scale images directly.

  17. Automatic Melody Generation System with Extraction Feature

    Science.gov (United States)

    Ida, Kenichi; Kozuki, Shinichi

    In this paper, we propose a melody generation system based on the analysis of existing melodies, and introduce a mechanism that takes the user's preference into account. Melody generation is done by arranging pitches optimally on a given rhythm. The optimality criterion is decided using feature elements extracted from existing music by the proposed method. Moreover, the user's preference is reflected in the criterion by letting users adjust some of the feature elements. Finally, a genetic algorithm (GA) optimizes the pitch array based on this criterion to realize the system.

  18. Performance Analysis of Texture Image Classification Using Wavelet Feature

    Directory of Open Access Journals (Sweden)

    Dolly Choudhary

    2013-01-01

    Full Text Available This paper compares the performance of various classifiers for multi-class image classification, where the features are extracted by the proposed algorithm using Haar wavelet coefficients. The wavelet features are extracted from original texture images and the corresponding complementary images. Since it is difficult to decide a priori which classifier would perform best for multi-class image classification, this work is an analytical study of the performance of various classifiers on a single multi-class classification problem. Fifteen textures are classified using a Feed-Forward Neural Network, a Naïve Bayes classifier, a K-nearest-neighbour classifier and a Cascaded Neural Network.
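One level of the 2-D Haar transform, with per-sub-band energies as texture features, can be sketched as below. The sub-band statistic (mean absolute value) and the random test texture are illustrative choices, not the paper's exact feature definition.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def wavelet_features(img):
    """Mean absolute energy of each sub-band, used as a texture descriptor."""
    return np.array([np.mean(np.abs(b)) for b in haar2d(img)])

rng = np.random.default_rng(0)
texture = rng.random((64, 64))
feats = wavelet_features(texture)
```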

  19. Mining Mid-level Features for Image Classification

    OpenAIRE

    Fernando, Basura; Fromont, Elisa; Tuytelaars, Tinne

    2014-01-01

    International audience; Mid-level or semi-local features learnt using class-level information are potentially more distinctive than the traditional low-level local features constructed in a purely bottom-up fashion. At the same time they preserve some of the robustness properties with respect to occlusions and image clutter. In this paper we propose a new and effective scheme for extracting mid-level features for image classification, based on relevant pattern mining. In particular, we mine...

  20. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilized different properties of images, like intensity distribution, spatial frequencies content and several others. A few case studies including isotropic...... and heterogeneous, congruent and non-congruent images are used to illustrate how the described methods work and to compare some of them...

  1. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb is more effective if it is based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude and time- and frequency-domain properties. Time-series analysis using an autoregressive (AR) model, and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector, which was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, index pointing, ulnar deviation, thumbs up, and thumb opposed to the little finger) are considered, and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification to control the prosthetic arm, which is not dealt with in this paper.
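Two of the features named above, AR model coefficients and the EMG amplitude histogram, can be sketched in NumPy as below. The AR fit here is a plain least-squares regression on lagged samples, and the surrogate SEMG signal, model order and bin count are invented for the example.

```python
import numpy as np

def ar_coeffs(signal, order=4):
    """Least-squares AR model: predict x[n] from the previous `order` samples."""
    X = np.column_stack([signal[order - k - 1: len(signal) - k - 1]
                         for k in range(order)])   # columns are lags 1..order
    y = signal[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def emg_histogram(signal, bins=9):
    """Amplitude histogram feature vector (normalised counts)."""
    hist, _ = np.histogram(signal, bins=bins)
    return hist / hist.sum()

rng = np.random.default_rng(3)
# surrogate SEMG burst: noise shaped by a slow amplitude envelope
t = np.arange(1000)
semg = np.sin(2 * np.pi * 0.05 * t) * rng.standard_normal(t.size)

feature_vector = np.concatenate([ar_coeffs(semg), emg_histogram(semg)])
```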

  2. Ship Targets Discrimination Algorithm in SAR Images Based on Hu Moment Feature and Texture Feature

    Directory of Open Access Journals (Sweden)

    Liu Lei

    2016-01-01

    Full Text Available To discriminate ship targets in SAR images, this paper proposes a method based on a combination of Hu moment features and texture features. First, seven Hu moment features are extracted; the gray-level co-occurrence matrix is then used to extract the mean, variance, uniformity, energy, entropy, inertia moment, correlation and difference features. Finally, a k-nearest-neighbour classifier is applied to the 15-dimensional feature vectors. The experimental results show that the method performs well.
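Hu moments are built from normalised central moments of the image. The sketch below computes only the first two of the seven invariants from their standard definitions; the two test blobs are synthetic and merely demonstrate translation invariance.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariant moments from normalised central moments."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        """Normalised central moment eta_pq = mu_pq / mu_00^(1+(p+q)/2)."""
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# the same rectangular "ship" blob at two positions gives identical moments
a = np.zeros((64, 64)); a[10:20, 10:30] = 1.0
b = np.zeros((64, 64)); b[30:40, 20:40] = 1.0
```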

  3. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

    Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. Current research in biometric security uses feature optimization to improve face detection techniques. A face essentially presents three types of features: skin color, texture, and the shape and size of the face, of which skin color and texture are the most important. The proposed detection technique uses the texture features of the face image, extracted with a partial feature extraction function, a promising approach to shape feature analysis. Multi-objective TLBO is used for feature selection and optimization; TLBO is a population-based search technique, and two constraint functions are defined for the selection and optimization processes. Initially, face images from the database are passed through the partial feature extractor function, which yields the texture features of the face image. For performance evaluation, the proposed algorithm was implemented in MATLAB 7.8.0 and tested on the Google face image database, with the hit-and-miss ratio used for numerical analysis. Our empirical evaluation shows better prediction results in comparison with the PIFR method of face detection.

  4. Eddy current pulsed phase thermography and feature extraction

    Science.gov (United States)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. A steel sample is selected as the material under test to avoid the influence of skin depth; it provides subsurface defects with different depths. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from the differential phase spectra, and preliminary linear relationships are established to measure the depth of these subsurface defects.

  5. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

    Full Text Available In computer vision, object detection is an essential precursor to further processes such as object tracking and analysis. In the same context, extracted features play an important role in detecting objects correctly. In this paper we present a method to extract local features based on interest points: key points are detected within an image, and a histogram of oriented gradients (HOG) is computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and discards its descriptor; the new descriptor is computed using HOG. The proposed method thus inherits the advantages of both approaches. To evaluate it, we used the well-known Caltech101 dataset. The initial results are encouraging despite using a small amount of training data.

  6. [Multiple transmission electron microscopic image stitching based on sift features].

    Science.gov (United States)

    Li, Mu; Lu, Yanmeng; Han, Shuaihu; Wu, Zhuobin; Chen, Jiajing; Liu, Zhexing; Cao, Lei

    2015-08-01

    We propose a new stitching method based on SIFT features to obtain an enlarged, high-resolution view of transmission electron microscopic (TEM) images. SIFT features are extracted from the images and combined with a fitted polynomial correction field to correct the images, followed by image alignment based on the SIFT features. Image seams at the junctions are finally removed by Poisson image editing to achieve seamless stitching. The method was validated on 60 local glomerular TEM images with an image alignment error of 62.5 to 187.5 nm. Compared with three other stitching methods, the proposed method effectively reduces image deformation and avoids artifacts, facilitating renal biopsy pathological diagnosis.

  7. Trace Ratio Criterion for Feature Extraction in Classification

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    2014-01-01

    Full Text Available A generalized linear discriminant analysis based on the trace ratio criterion algorithm (GLDA-TRA) is derived to extract features for classification. With the proposed GLDA-TRA, a set of orthogonal features can be extracted in succession. Each newly extracted feature is the optimal feature that maximizes the trace ratio criterion function in the subspace orthogonal to the space spanned by the previously extracted features.
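The trace ratio criterion referred to above is commonly written as follows (a standard formulation, not quoted from the paper): with $S_b$ and $S_w$ the between-class and within-class scatter matrices, one seeks an orthonormal projection $W$ maximizing

```latex
W^{*} \;=\; \arg\max_{W^{\top}W = I}\;
\frac{\operatorname{tr}\!\left(W^{\top} S_b\, W\right)}
     {\operatorname{tr}\!\left(W^{\top} S_w\, W\right)}
```

Unlike the ratio-trace form $\operatorname{tr}\!\big((W^{\top}S_w W)^{-1} W^{\top}S_b W\big)$ solved by a generalized eigendecomposition, the trace ratio has no closed-form solution and is typically found iteratively; GLDA-TRA extracts each new column of $W$ in the subspace orthogonal to the columns already found.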

  8. Integration of Image-Derived and Pos-Derived Features for Image Blur Detection

    Science.gov (United States)

    Teo, Tee-Ann; Zhan, Kai-Zhi

    2016-06-01

    Image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on the UAV to record its trajectory; it can be used to calculate the positional and angular velocities while the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps are feature extraction, blur image detection and verification. In feature extraction, different features are extracted from the images and from POS. The image-derived features are the mean and standard deviation of the image gradient. For the POS-derived features, we modify the traditional degree-of-linear-blur (b_linear) measure into a degree-of-motion-blur (b_motion) based on the collinearity condition equations and POS parameters; the positional and angular velocities are also adopted as POS-derived features. In blur detection, a Support Vector Machine (SVM) classifier with the extracted features (image information, POS data, b_linear and b_motion) separates blurred and sharp UAV images. The experiment uses the SenseFly eBee UAV system with 129 images. Using image features alone, the overall classification accuracy is only 56%; integrating image-derived and POS-derived features improves the overall accuracy from 56% to 76%. The study also indicates that the proposed degree-of-motion-blur outperforms the traditional degree-of-linear-blur.
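The image-derived features named above (mean and standard deviation of the gradient magnitude) can be sketched as below. The "sharp" and "blurred" test images are synthetic, with blur simulated by a crude directional average; they stand in for real UAV frames.

```python
import numpy as np

def gradient_features(img):
    """Image-derived blur features: mean and std of the gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()])

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))              # high-frequency content

# crude motion blur: average the image with shifted copies along one axis
blurred = np.copy(sharp)
for shift in range(1, 6):
    blurred += np.roll(sharp, shift, axis=1)
blurred /= 6.0

f_sharp, f_blur = gradient_features(sharp), gradient_features(blurred)
```

A blurred frame shows a weaker mean gradient, which is what lets a classifier (an SVM in the paper) separate the two classes from these features.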

  9. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Wei-Cheng Li; Chung-Lin Huang; Pei-Yeh Chang

    2014-01-01

    In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for the assessment of asymmetry. Features in the 3D point cloud of an infant's cranium are identified by local feature analysis and a two-phase k-means classification algorithm, so that the 3D images of infants with asymmetric crania can be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model to measure the asymmetry. Numerical data on the cranial volume can be reviewed by a pediatrician to adjust the treatment plan, and the system can also be used to demonstrate treatment progress.

  10. Features extraction in anterior and posterior cruciate ligaments analysis.

    Science.gov (United States)

    Zarychta, P

    2015-12-01

    The main aim of this research is finding feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to define the ligament structure clearly and make diagnosis easier. The feature vectors are obtained by analysing both ligaments after their extraction. In the first stage, a region of interest (ROI) including the cruciate ligaments (CL) is outlined to reduce the area of analysis; here, a fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, a fuzzy connectedness procedure is performed, which extracts the anterior and posterior cruciate ligament structures. In the last stage, 3-dimensional models of the anterior and posterior cruciate ligaments are built from the extracted structures and the feature vectors are created. The methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint; the 3D display is based on the Visualization Toolkit (VTK).

  11. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis, and disease classification. For pre-processing, a 2-step iris localization approach is proposed; for pathological feature extraction, a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed; finally, support vector machines are constructed to recognize two typical disease classes, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance, for both hospital and public use.
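
    A 2-D Gabor filter of the kind used for such texture analysis can be sketched as follows; the parameter values are illustrative, not the paper's actual filter bank settings.

```python
import math

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope multiplied
    by a cosine carrier oriented at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def filter_response(image, kernel, cy, cx):
    """Correlation of the kernel with the image at one pixel
    (zero padding outside the image)."""
    half = len(kernel) // 2
    total = 0.0
    for ky, krow in enumerate(kernel):
        for kx, w in enumerate(krow):
            y, x = cy + ky - half, cx + kx - half
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                total += w * image[y][x]
    return total
```

A bank of such kernels at several orientations and wavelengths yields the per-pixel texture responses that feed the classifier.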

  12. Applying Feature Extraction for Classification Problems

    Directory of Open Access Journals (Sweden)

    Foon Chi

    2009-03-01

    Full Text Available With the wealth of image data that is now becoming increasingly accessible through the advent of the World Wide Web and the proliferation of cheap, high-quality digital cameras, it is becoming ever more desirable to be able to automatically classify images into appropriate categories, such that intelligent agents and other intelligent software might make better-informed decisions regarding them without a need for excessive human intervention. However, as with most Artificial Intelligence (AI) methods, it is necessary to take small steps towards the goal. With this in mind, a method is proposed here to represent localised features using disjoint sub-images taken from several datasets of retinal images, for their eventual use in an incremental learning system. A tile-based localised adaptive threshold selection method was taken for vessel segmentation based on separate colour components. Arteriole-venous differentiation was made possible by using the composite of these components and high-quality fundal images. Performance was evaluated on the DRIVE and STARE datasets, achieving an average specificity of 0.9379 and sensitivity of 0.5924.
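
    The tile-based localised adaptive threshold idea can be illustrated with a minimal sketch in which each tile is binarized against its own mean; the tile size and the mean-based rule are assumptions, not the paper's exact selection method.

```python
def tile_adaptive_threshold(image, tile=4):
    """Binarize an image tile by tile, each tile using its own mean
    intensity as the threshold, so the decision adapts to local
    illumination instead of using one global cut-off."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = [image[y][x]
                     for y in range(ty, min(ty + tile, h))
                     for x in range(tx, min(tx + tile, w))]
            thresh = sum(block) / len(block)  # per-tile threshold
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    out[y][x] = 1 if image[y][x] > thresh else 0
    return out
```

On an image whose left half is dark and right half is bright, a single global threshold would flag the entire bright half, while the per-tile rule picks out only the locally bright pixels in each region.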

  13. Feature Extraction and Analysis of Breast Cancer Specimen

    Science.gov (United States)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. In effect, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique through image processing, and prevention by controlling p53 gene mutation to some extent.

  14. Accurate Feature Point Extraction Algorithm for Airplane Images in Residual Ice Detection

    Institute of Scientific and Technical Information of China (English)

    高建树; 杨涛

    2012-01-01

    Traditional corner detection algorithms cannot accurately extract the feature points of interest. To solve this problem, this paper proposes a new feature point extraction algorithm. The feature points of the airplane image are extracted by a curvature-based corner detection algorithm and used as the image to be matched; a pixel-correlation matching algorithm is then applied to extract the best-matching feature points, and a matching algorithm constrained by the fixed structure of the airplane fuselage removes mismatched pairs. Features of images taken under different illumination conditions and shooting angles are extracted and matched. Experimental results show that the algorithm has good adaptability and extracts feature points accurately.

  15. Contour Feature Extraction and Representation for Thangka Images Based on Frequency Spectrum Variation

    Institute of Scientific and Technical Information of China (English)

    王维兰; 钱建军; 杨旦春; 王念一

    2011-01-01

    Traditional Content-Based Image Retrieval (CBIR) and tracking algorithms mainly use image colour, texture, and other features for similarity comparison between two images. However, a large number of experiments and applications also show that colour and texture alone cannot precisely capture spatial structure and object shape, and unexpected results are often produced during image retrieval. To enhance retrieval precision, an image retrieval method combining colour features and object contour curves is presented. The image is segmented and the contour of the object of interest is extracted; the contour is then affine-transformed and normalized. The contour carries the whole information of the object of interest and preserves geometric invariance; for the colour feature, a histogram of the primary clusters is extracted, which contains not only colour information but also the spatial location of the primary clusters. The weighted average of the colour distance histogram and the distance deviation of the contour curves is applied as the similarity measure between two images. A method is further proposed that represents image contour features by the variation of the frequency spectrum in the frequency domain, successfully applied to headdress classification in religious Thangka images. The headdress region of the image is annotated and segmented by combining a basic global threshold with a threshold selected from a user-observed histogram; the pixels are converted to points in a Cartesian coordinate system and representative contour points are extracted; these contour points are transformed into the frequency domain by the Fourier transform, and the variation of the lower-magnitude part of the spectrum is used as the contour feature, with colour features extracted inside the contour. Experiments show that the extracted headdress features can effectively classify Thangka images.
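
    The idea of describing a contour by its Fourier spectrum can be illustrated with a classic Fourier-descriptor sketch; this is a standard construction rather than the paper's exact feature, and the normalization choices are assumptions.

```python
import cmath

def fourier_descriptor(contour, keep=4):
    """Magnitudes of the low-order Fourier coefficients of a closed
    contour given as (x, y) points. Dropping the DC term removes
    translation; dividing by the first harmonic removes scale."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    # Direct DFT of the complex contour signal (fine for short contours).
    coeffs = [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n)) / n
              for k in range(n)]
    scale = abs(coeffs[1]) or 1.0
    return [abs(coeffs[k]) / scale for k in range(1, keep + 1)]
```

Because translation only changes the DC coefficient and uniform scaling multiplies every coefficient by the same factor, the normalized magnitudes compare contour shapes independently of position and size.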

  16. Imaging features in Hirayama disease

    Directory of Open Access Journals (Sweden)

    Sonwalkar Hemant

    2008-01-01

    Full Text Available Purpose: To evaluate the MR findings in clinically suspected cases of Hirayama disease. Materials and Methods: The pre- and post-contrast, neutral- and flexion-position cervical MR images of eight patients with clinically suspected Hirayama disease were evaluated for the following findings: localized lower cervical cord atrophy, asymmetric cord flattening, abnormal cervical curvature, loss of attachment between the posterior dural sac and subjacent lamina, anterior shifting of the posterior wall of the cervical dural canal, and an enhancing epidural component with flow voids. The distribution of the above features in our patient population was noted and correlated with the clinical presentation and electromyography findings. Observations: Although lower cervical cord atrophy was noted in all eight cases of suspected Hirayama disease, the rest of the findings were variably distributed, with asymmetric cord flattening, abnormal cervical curvature, anterior shifting of the posterior wall of the cervical dural canal, and an enhancing epidural component seen in six out of eight (75%) cases. An additional finding of thoracic extension of the enhancing epidural component was noted in five out of eight cases. Conclusion: Dynamic post-contrast MRI evaluation of the cervicothoracic spine is an accurate method for the diagnosis of Hirayama disease.

  17. FACE RECOGNITION USING FEATURE EXTRACTION AND NEURO-FUZZY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Ritesh Vyas

    2012-09-01

    Full Text Available Face is a primary focus of attention in social interaction, playing a major role in conveying identity and emotion. The human ability to recognize faces is remarkable: people can recognize thousands of faces learned throughout their lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards, or changes in hair style. In this work, a system is designed to recognize human faces based on their facial features. An edge detection technique is used to reveal the outlines of the face, eyes, and nose. Facial features are extracted in the form of distances between important feature points. After normalization, these feature vectors are learned by an artificial neural network and used to recognize facial images.

  18. AUTOMATIC CLASSIFICATION OF POINT CLOUDS EXTRACTED FROM ULTRACAM STEREO IMAGES

    OpenAIRE

    M. Modiri; Masumi, M.; A. Eftekhari

    2015-01-01

    Automatic extraction of building roofs, streets, and vegetation is a prerequisite for many GIS (Geographic Information System) applications, such as urban planning and 3D building reconstruction. Nowadays, with advances in image processing and image matching techniques, dense point clouds are available by using feature-based and template-based image matching together. Point cloud classification is an important step in automatic feature extraction. Therefore, in this study, the classific...

  19. MRI and PET image fusion using fuzzy logic and image local features.

    Science.gov (United States)

    Javed, Umer; Riaz, Muhammad Mohsin; Ghafoor, Abdul; Ali, Syed Sohaib; Cheema, Tanveer Ahmed

    2014-01-01

    An image fusion technique for magnetic resonance imaging (MRI) and positron emission tomography (PET) using local features and fuzzy logic is presented. The aim of the proposed technique is to maximally combine the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results show that the proposed scheme produces significantly better results compared to state-of-the-art schemes.

  20. Extraction of photomultiplier-pulse features

    Energy Technology Data Exchange (ETDEWEB)

    Joerg, Philipp; Baumann, Tobias; Buechele, Maximilian; Fischer, Horst; Gorzellik, Matthias; Grussenmeyer, Tobias; Herrmann, Florian; Kremser, Paul; Kunz, Tobias; Michalski, Christoph; Schopferer, Sebastian; Szameitat, Tobias [Physikalisches Institut der Universitaet Freiburg, Freiburg im Breisgau (Germany)

    2013-07-01

    Experiments in subatomic physics have to handle data rates of several MHz per readout channel to reach statistical significance for the measured quantities. Frequently such experiments have to deal with fast signals which may cover large dynamic ranges. For applications which require amplitude as well as time measurements with the highest accuracy, transient recorders with very high resolution and deep on-board memory are the first choice. We have built a 16-channel, 12- or 14-bit, single-unit VME64x/VXS sampling ADC module which may sample at rates up to 1 GS/s. Fast algorithms have been developed and successfully implemented for the readout of the recoil-proton detector at the COMPASS-II experiment at CERN. We report on the implementation of the feature extraction algorithms and the performance achieved during a pilot run with the COMPASS-II experiment.

  1. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them: it tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  2. Road marking features extraction using the VIAPIX® system

    Science.gov (United States)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and colour segmentation to detect all white objects on the road, the present algorithm enables us to examine these images automatically and rapidly, and to obtain information on road markings, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.
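
    Phase-only correlation can be sketched in one dimension. The VIAPIX pipeline works on 2-D images; this 1-D version only illustrates the core idea of normalizing the cross-power spectrum to unit magnitude so that the correlation peak localizes the shift between a signal and a template.

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform (O(n^2), fine for short signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_only_correlation(signal, template):
    """Normalize the cross-power spectrum to unit magnitude and invert;
    the index of the peak gives the circular shift between the inputs."""
    F, G = dft(signal), dft(template)
    cross = []
    for f, g in zip(F, G):
        p = f * g.conjugate()
        cross.append(p / (abs(p) or 1.0))  # keep phase, discard magnitude
    poc = [c.real for c in idft(cross)]
    return poc.index(max(poc))
```

Because only the phase is kept, the peak is sharp and largely insensitive to illumination changes, which is what makes POC-style filters attractive for identifying markings.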

  3. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Full Text Available Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground, and a vital-information imposed layer; a template document image consists of a null background and a general-information foreground layer. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and template images using the convex hull, and the vertices of the two convex polygons are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling; translation is handled by a tight crop. For every transformation of the input vertices, the minimum Hausdorff distance (MHD) is computed. The minimum Hausdorff distance identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since transformation is an estimation process, components in the input image do not overlay exactly on components in the template, so a connected-component technique is applied to extract contour boxes at word level to identify partially overlapping components. Geometrical features such as density, area, and degree of overlap are extracted and compared between partially overlapping components to identify and eliminate components common to the input and template images. The residue constitutes the imposed layer. Experimental results indicate the efficacy of the proposed model at modest computational complexity. Experiments have been conducted on a variety of filled-in forms, applications, and bank cheques, and data sets have been generated as test sets for comparative analysis.
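
    The rotation/scale search driven by the minimum Hausdorff distance can be sketched as follows; the candidate angle and scale grids are assumptions for illustration.

```python
import math

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(p, q):
        return max(min(math.dist(pp, qq) for qq in q) for pp in p)
    return max(directed(a, b), directed(b, a))

def best_alignment(inp, template, angles, scales):
    """Try every rotation/scale combination on the input vertices and keep
    the one with minimum Hausdorff distance (the MHD criterion); translation
    is assumed already handled by a tight crop."""
    best_params, best_d = None, float("inf")
    for ang in angles:
        ca, sa = math.cos(ang), math.sin(ang)
        for s in scales:
            moved = [(s * (x * ca - y * sa), s * (x * sa + y * ca))
                     for x, y in inp]
            d = hausdorff_distance(moved, template)
            if d < best_d:
                best_params, best_d = (ang, s), d
    return best_params, best_d
```

Applied to the convex-hull vertices of input and template, the winning (angle, scale) pair is the transform used to register the input document against the template.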

  4. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee regional homogeneity in PolSAR images. In the classification step, color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features which belong to different classes. Extensive experimental comparisons with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.
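
    The plain LBP operator underlying RHLBP can be sketched as follows; the regional homogeneity extension itself is not reproduced here.

```python
def lbp_code(image, y, x):
    """Standard 8-neighbour local binary pattern at pixel (y, x): each
    neighbour at least as bright as the centre contributes one bit."""
    center = image[y][x]
    # Neighbours enumerated clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a region is the usual LBP texture descriptor; RHLBP additionally constrains the codes by the homogeneity of the surrounding region.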

  5. Feature Selection for Image Retrieval based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Preeti Kushwaha

    2016-12-01

    Full Text Available This paper describes the development and implementation of feature selection for content-based image retrieval. We develop a CBIR system with a new, efficient technique. In this system, we use multi-feature extraction covering colour, texture, and shape. Three techniques are used for feature extraction: colour moments, the gray-level co-occurrence matrix, and the edge histogram descriptor. To reduce the curse of dimensionality and find the best features from the feature set, feature selection based on a genetic algorithm (GA) is applied. These features are divided into similar image classes using clustering, for fast retrieval and improved execution time; clustering is performed by the k-means algorithm. The experimental results show that feature selection using the GA reduces retrieval time and also increases retrieval precision, thus giving better and faster results compared to a normal image retrieval system. The results also show the precision and recall of the proposed approach compared to the previous approach for each image class. The CBIR system is more efficient and performs better using feature selection based on the genetic algorithm.
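
    The colour-moment feature mentioned above can be sketched as follows: per-channel mean, standard deviation, and skewness. The 9-D layout is a common convention, assumed rather than taken from the paper.

```python
def color_moments(channel):
    """First three colour moments (mean, standard deviation, skewness)
    of one colour channel, a common CBIR colour feature."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    std = var ** 0.5
    third = sum((v - mean) ** 3 for v in channel) / n
    # Signed cube root keeps the skewness on the same scale as the data.
    skew = (abs(third) ** (1.0 / 3.0)) * (1 if third >= 0 else -1)
    return mean, std, skew

def feature_vector(r, g, b):
    """Concatenate the moments of each channel into a 9-D descriptor."""
    feats = []
    for ch in (r, g, b):
        feats.extend(color_moments(ch))
    return feats
```

In the full system such 9-D colour vectors are concatenated with texture and shape features before the GA selects the most discriminative subset.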

  6. Remote sensing image classification based on block feature point density analysis and multiple-feature fusion

    Science.gov (United States)

    Li, Shijin; Jiang, Yaping; Zhang, Yang; Feng, Jun

    2015-10-01

    With the development of remote sensing (RS) and related technologies, the resolution of RS images is increasing. Compared with moderate- or low-resolution images, high-resolution ones can provide more detailed ground information. However, different kinds of terrain have complex spatial distributions, and the different objects in high-resolution images exhibit a variety of features. The effectiveness of these features is not the same, but some of them are complementary. Considering the above characteristics, a new method is proposed to classify RS images based on hierarchical fusion of multiple features. Firstly, RS images are pre-classified into two categories in terms of whether feature points are uniformly or non-uniformly distributed. Then, the color histogram and Gabor texture features are extracted from the uniformly-distributed category, and the linear spatial pyramid matching using sparse coding (ScSPM) feature is obtained from the non-uniformly-distributed category. Finally, the classification is performed by two support vector machine classifiers. The experimental results on a large RS image database with 2100 images show that the overall classification accuracy is boosted by 10.1% in comparison with the highest accuracy of the single-feature classification methods. Compared with other multiple-feature fusion methods, the proposed method has achieved the highest classification accuracy on this dataset, 90.1%, and the time complexity of the algorithm is also greatly reduced.

  7. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used for extracting human facial features and for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or failed AAM fittings. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on face images; then the image structure is represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  8. Wilson’s disease: Atypical imaging features

    Directory of Open Access Journals (Sweden)

    Venugopalan Y Vishnu

    2016-10-01

    Full Text Available Wilson’s disease is a genetic movement disorder with characteristic clinical and imaging features. We report a 17-year-old boy who presented with sialorrhea, hypophonic speech, paraparesis with repeated falls, and recurrent seizures along with cognitive decline. He had bilateral Kayser-Fleischer rings. Other than the typical features of Wilson’s disease on cranial MRI, there were extensive white matter signal abnormalities (T2 and FLAIR hyperintensities) and gyriform contrast enhancement, which are rare imaging features in Wilson’s disease. A high index of suspicion is required to diagnose Wilson’s disease when atypical imaging features are present.

  9. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

    This paper proposes a novel adaptive algorithm to automatically extract facial feature points, such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners, in frontal-view faces; it is based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to detect the location of the face and crops the face region in an image. Based on the structure of the human face, six relevant regions, namely the right eyebrow, left eyebrow, right eye, left eye, nose, and mouth areas, are cropped from the face image. Then the histogram of each cropped region is computed, and its cumulative histogram value is employed with varying threshold values to create a new filtered image in an adaptive way. The connected component of the area of interest in each filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth filtered images and a contour algorithm for nos...
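
    The cumulative-histogram filtering can be sketched as follows, under the assumption that the feature regions of interest are the darkest pixels of each crop (the paper sweeps several threshold fractions rather than fixing one).

```python
def cumulative_threshold_mask(region, fraction):
    """Binarize a cropped region by keeping the darkest pixels whose
    cumulative histogram count reaches the given fraction of all pixels;
    sweeping the fraction reproduces the varying-threshold filtering."""
    hist = [0] * 256
    pixels = [p for row in region for p in row]
    for p in pixels:
        hist[p] += 1
    target = fraction * len(pixels)
    cum, thresh = 0, 255
    for level, count in enumerate(hist):
        cum += count
        if cum >= target:
            thresh = level  # smallest gray level covering the fraction
            break
    return [[1 if p <= thresh else 0 for p in row] for row in region]
```

Connected components of the resulting mask are the candidate feature regions (e.g. the dark pupil inside an eye crop).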

  10. Image segmentation using association rule features.

    Science.gov (United States)

    Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J

    2002-01-01

    A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images, and the frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.

  11. Image fusion using sparse overcomplete feature dictionaries

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  12. Texture Feature Extraction Method Combining Nonsubsampled Contourlet Transform with Gray Level Co-occurrence Matrix

    Directory of Open Access Journals (Sweden)

    Xiaolan He

    2013-12-01

    Full Text Available The gray-level co-occurrence matrix (GLCM) is an important method to extract the image texture features of synthetic aperture radar (SAR). However, GLCM can only extract textures at a single scale and in a single direction. A texture feature extraction method combining the nonsubsampled contourlet transform (NSCT) and GLCM is proposed, so as to extract texture features at multiple scales and in multiple directions. We first conduct multi-scale, multi-direction decomposition of the SAR images with NSCT; we then extract co-occurrence measures with GLCM from the obtained sub-band images and conduct correlation analysis on the extracted measures to remove redundant feature quantities, combining them with gray-level features to constitute a multi-feature vector. Finally, we make full use of the advantages of the support vector machine for small sample databases and its generalization ability, and complete the division of the multi-feature vector space by SVM so as to achieve SAR image segmentation. The experimental results show that segmentation accuracy can be improved and good edge retention obtained by using the GLCM texture extraction method based on the NSCT domain and multi-feature fusion in SAR image segmentation.
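
    The GLCM computation for a single displacement, plus two Haralick-style measures, can be sketched as follows; the NSCT decomposition itself is not reproduced.

```python
def glcm(image, dy, dx, levels):
    """Gray-level co-occurrence matrix for one displacement (dy, dx),
    normalized to joint probabilities of gray-level pairs."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Two classic Haralick-style measures: energy and contrast."""
    energy = sum(v * v for row in p for v in row)
    contrast = sum(v * (i - j) ** 2
                   for i, row in enumerate(p)
                   for j, v in enumerate(row))
    return energy, contrast
```

In the proposed pipeline the same computation is applied to each NSCT sub-band, so each scale and direction contributes its own co-occurrence measures to the multi-feature vector.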

  13. Point features extraction: towards slam for an autonomous underwater vehicle

    CSIR Research Space (South Africa)

    Matsebe, O

    2010-07-01

    Full Text Available 25th International Conference of CAD/CAM, Robotics & Factories of the Future, 13-16 July 2010, Pretoria, South Africa. POINT FEATURES EXTRACTION: TOWARDS SLAM FOR AN AUTONOMOUS UNDERWATER VEHICLE. O. Matsebe1,2, M... The vehicle is equipped with a Mechanically Scanned Imaging Sonar (Micron DST Sonar) which is able...

  14. A fingerprint feature extraction algorithm based on curvature of Bezier curve

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Fingerprint feature extraction is a key step in fingerprint identification. A novel feature extraction algorithm is proposed in this paper, which describes fingerprint features using the bending information of fingerprint ridges. In the algorithm, ridges in a specific region of the fingerprint image are first traced, and these ridges are then fitted with Bezier curves. Finally, the point of maximal curvature on a Bezier curve is defined as a feature point. Experimental results demonstrate that these feature points characterize the bending trend of fingerprint ridges effectively and are robust to noise; in addition, the extraction precision of this algorithm is better than that of conventional approaches.
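
    Finding the point of maximal curvature on a fitted cubic Bezier curve can be sketched with a sampling-based search; the ridge tracing and curve fitting steps are omitted, and the sample count is an assumption.

```python
def bezier_max_curvature(p0, p1, p2, p3, samples=201):
    """Sample a cubic Bezier curve and return (t, point) where the
    curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) is largest."""
    def point(t):
        u = 1.0 - t
        return (u**3 * p0[0] + 3*u*u*t * p1[0] + 3*u*t*t * p2[0] + t**3 * p3[0],
                u**3 * p0[1] + 3*u*u*t * p1[1] + 3*u*t*t * p2[1] + t**3 * p3[1])

    def curvature(t):
        u = 1.0 - t
        # First and second derivatives of the cubic Bezier.
        dx = 3*u*u*(p1[0]-p0[0]) + 6*u*t*(p2[0]-p1[0]) + 3*t*t*(p3[0]-p2[0])
        dy = 3*u*u*(p1[1]-p0[1]) + 6*u*t*(p2[1]-p1[1]) + 3*t*t*(p3[1]-p2[1])
        ddx = 6*u*(p2[0] - 2*p1[0] + p0[0]) + 6*t*(p3[0] - 2*p2[0] + p1[0])
        ddy = 6*u*(p2[1] - 2*p1[1] + p0[1]) + 6*t*(p3[1] - 2*p2[1] + p1[1])
        speed_sq = dx*dx + dy*dy
        return abs(dx*ddy - dy*ddx) / (speed_sq ** 1.5 or 1e-12)

    best_t = max((i / (samples - 1) for i in range(samples)), key=curvature)
    return best_t, point(best_t)
```

For a ridge fitted as a symmetric arch, the maximum falls at the apex, which is exactly the kind of bending landmark the paper uses as a feature point.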

  15. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    Science.gov (United States)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing number of WAMI collections and the feature extraction performed on the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of WAMI images. The feature extraction for the images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.

  16. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race, and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered the most delicate and sensitive part of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid, and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we choose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result, it can be suggested that race perception is an ability that can be learned; the eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups, and the extracted texture and shape features were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is fundamental research essential to the establishment of a human-like race recognition system.
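
    A minimal PCA sketch on 2-D points shows the principle behind extracting dominant directions from face data. Real face PCA operates on high-dimensional shape and texture vectors; the closed-form 2x2 case below is only an illustration.

```python
import math

def principal_axis(points):
    """Dominant eigenvector of the 2x2 covariance matrix of 2-D points,
    via the closed-form angle 0.5 * atan2(2*s_xy, s_xx - s_yy)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Second-order central moments (covariance entries).
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(angle), math.sin(angle)
```

In the high-dimensional case the same idea, eigendecomposition of the covariance matrix, yields the principal components onto which face textures and shapes are projected and from which new faces can be synthesized.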

  17. Imaging features of iliopsoas bursitis

    Energy Technology Data Exchange (ETDEWEB)

    Wunderbaldinger, P. [Department of Radiology, University of Vienna (Austria); Center of Molecular Imaging Research, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA (United States); Bremer, C. [Department of Radiology, University of Muenster (Germany); Schellenberger, E. [Center of Molecular Imaging Research, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA (United States); Department of Radiology, Martin-Luther University of Halle-Wittenberg, Halle (Germany); Cejna, M.; Turetschek, K.; Kainberger, F. [Department of Radiology, University of Vienna (Austria)

    2002-02-01

    The aim of this study was firstly to describe the spectrum of imaging findings seen in iliopsoas bursitis, and secondly to compare cross-sectional imaging techniques in demonstrating the extent, size, and appearance of iliopsoas bursitis, as referenced by surgery. Imaging studies of 18 patients (13 women, 5 men; mean age 53 years) with surgically proven iliopsoas bursitis were reviewed. All patients received conventional radiographs of the pelvis and hip, and US and MR imaging of the hip; CT was performed in 5 of the 18 patients. Ultrasound, CT, and MR imaging all demonstrated enlarged iliopsoas bursae. The bursal wall was thin and well defined in 83% and thickened in 17% of all cases. The septations seen in two cases on US were not seen by CT or MRI. A communication between the bursa and the hip joint was seen by MR imaging, and surgically verified, in all 18 patients, whereas US and CT failed to demonstrate it in 44% and 40% of the cases, respectively. Hip joint effusion was seen by MRI in 16 patients and verified by surgery, whereas CT (4 of 5) and US (n=12) underestimated the number. The overall size of the bursa corresponded best between MRI and surgery, whereas CT and US tended to underestimate it. Contrast enhancement of the bursal wall was seen in all cases. The imaging characteristics of iliopsoas bursitis are a well-defined, thin-walled cystic mass with a communication to the hip joint and peripheral contrast enhancement. The most accurate way to assess iliopsoas bursitis is with MR imaging; thus, it should be used for accurate therapy planning and follow-up studies. To initially prove an iliopsoas bursitis, US is the most cost-effective, easy-to-perform, and fast alternative. (orig.)

  18. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    Science.gov (United States)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project utilizes remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imageries. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imageries or the half-meter-resolution orthophotograph images obtained during LiDAR acquisition, and not on the three-meter accuracy of the GPS. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imageries may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imageries.

  19. Features Selection for Skin Micro-Image Symptomatic Recognition

    Institute of Scientific and Technical Information of China (English)

    HU Yue-li; CAO Jia-lin; ZHAO Qian; FENG Xu

    2004-01-01

    Automatic recognition of skin micro-image symptoms is important in skin diagnosis and treatment, and feature selection improves the classification performance for such symptoms. This paper proposes a hybrid approach based on the support vector machine (SVM) technique and a genetic algorithm (GA) to select an optimum feature subset from the feature group extracted from skin micro-images. An adaptive GA is introduced to maintain the convergence rate. With the proposed method, the average cross-validation accuracy for classification of 5 classes of skin symptoms is increased from 88.25% using all features to 96.92% using only the selected features. The experimental results are satisfactory.
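    The SVM-plus-GA selection scheme described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the dataset, population size, truncation selection and bit-flip mutation are all assumptions, and the paper's adaptive GA (which adjusts rates to maintain convergence) is replaced here by fixed rates.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the skin micro-image feature matrix (hypothetical data).
X, y = make_classification(n_samples=120, n_features=12, n_informative=4,
                           n_redundant=2, random_state=0)

def fitness(mask):
    """Mean 3-fold cross-validation accuracy of an SVM on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

def ga_select(n_features, pop_size=10, generations=5, p_mut=0.1):
    """Tiny genetic algorithm over binary feature-selection masks."""
    pop = rng.random((pop_size, n_features)) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut      # bit-flip mutation
            children.append(child ^ flip)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(m) for m in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_score = ga_select(X.shape[1])
```

    The returned mask plays the role of the "optimum feature subset"; on real data the fitness call would wrap the authors' skin-symptom feature matrix instead.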

  1. The Extraction of Several Features of X-Ray Images of the Thighbone's Near End

    Institute of Scientific and Technical Information of China (English)

    柳崎峰; 王凤儒; 柳峻峰

    2001-01-01

    The extraction of several features from X-Ray images of the thighbone's near end is an essential prerequisite for automatic correction of fractures of the thighbone's near end under computer control. To extract these features, this paper presents the following method. First, noise in the image is reduced by an edge-preserving filter. Second, the area within a rectangular window is binarized with an automatically determined local threshold. Finally, the key points are located by a sliding small-window search, and the direction and width of the thighbone body are then computed. Experiments show that this approach is simple, fast and effective.
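    The locally binarized window with an automatically selected threshold could plausibly be realized with Otsu's criterion; the sketch below assumes that reading (the abstract does not name its threshold criterion), and `binarize_window` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def otsu_threshold(window):
    """Pick a binarization threshold automatically by maximizing the
    between-class variance of the grey-level histogram (Otsu's method)."""
    hist = np.bincount(window.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))    # first moment up to each level
    mu_t = mu[-1]
    # Between-class variance for every candidate threshold; guard 0/0 at the ends.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def binarize_window(image, top, left, size):
    """Threshold only a rectangular window of the image, as in the paper."""
    win = image[top:top + size, left:left + size]
    return (win > otsu_threshold(win)).astype(np.uint8)

# Hypothetical bimodal window: the threshold should fall between the two modes.
demo = np.zeros((8, 8), dtype=np.uint8)
demo[:, :4] = 20
demo[:, 4:] = 200
t = otsu_threshold(demo)
mask = binarize_window(demo, 0, 0, 8)
```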

  2. FEATURE EXTRACTION OF BONES AND SKIN BASED ON ULTRASONIC SCANNING

    Institute of Scientific and Technical Information of China (English)

    Zheng Shuxian; Zhao Wanhua; Lu Bingheng; Zhao Zhao

    2005-01-01

    In prosthetic socket design, CT scanning is the routine technique for obtaining cross-sectional images of the residual limb, but it is costly and exposes the patient to radiation. To address these drawbacks, a new ultrasonic scanning method is developed to acquire the bone and skin contours of the residual limb. Using a pig fore-leg as the scanning object, an overlapping algorithm is designed to reconstruct the 2D cross-sectional image; the contours of the bone and skin are extracted using an edge detection algorithm, and the 3D model of the pig fore-leg is reconstructed using reverse engineering technology. Checking the accuracy of the image by scanning a cylindrical workpiece shows that the extracted contours of the cylinder are quite close to the standard circumference, so it is feasible to obtain the contours of bones and skin by ultrasonic scanning. The ultrasonic scanning system, featuring no radiation and low cost, is a new means of cross-sectional scanning for medical imaging.

  3. Content Based Image Recognition by Information Fusion with Multiview Features

    Directory of Open Access Journals (Sweden)

    Rik Das

    2015-09-01

    Substantial research interest has been observed in the field of object recognition as a vital component of modern intelligent systems. Content-based image classification and retrieval have been considered two popular techniques for identifying the object of interest. Feature extraction plays the pivotal role in the successful implementation of the aforesaid techniques. The paper presents two novel techniques for feature extraction from diverse image categories, both in the spatial domain and in the frequency domain. The multi-view features from the image categories were evaluated for classification and retrieval performance by means of a fusion-based recognition architecture. The experimentation was carried out with four different popular public datasets. The proposed fusion framework exhibited an average increase of 24.71% and 20.78% in precision rates for classification and retrieval respectively, when compared to state-of-the-art techniques. The experimental findings were validated with a paired t-test for statistical significance.

  4. Finding curvilinear features in speckled images

    Science.gov (United States)

    Samadani, Ramin; Vesecky, John F.

    1990-01-01

    A method for finding curves in digital images with speckle noise is described. The solution method differs from standard linear convolutions followed by thresholds in that it explicitly allows curvature in the features. Maximum a posteriori (MAP) estimation is used, together with statistical models for the speckle noise and for the curve-generation process, to find the most probable estimate of the feature, given the image data. The estimation process is first described in general terms. Then, incorporation of the specific neighborhood system and a multiplicative noise model for speckle allows derivation of the solution, using dynamic programming, of the estimation problem. The detection of curvilinear features is considered separately. The detection results allow the determination of the minimal size of detectable feature. Finally, the estimation of linear features, followed by a detection step, is shown for computer-simulated images and for a SAR image of sea ice.
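    The dynamic-programming solution can be illustrated with a much-simplified minimal-cost path search. The sketch below omits the paper's MAP formulation and speckle statistics entirely; it only shows how dynamic programming accommodates curvature, and the one-column-shift-per-row neighborhood is an assumed stand-in for the paper's neighborhood system.

```python
import numpy as np

def extract_curve(cost):
    """Find the minimum-cost curve running top-to-bottom through a cost image,
    allowing the column to shift by at most one pixel per row (so curvature is
    permitted, unlike a straight-line matched filter). Returns one column per row."""
    rows, cols = cost.shape
    acc = cost.astype(float)                  # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)  # backpointers for the best predecessor
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            prev = acc[r - 1, lo:hi]
            j = int(np.argmin(prev))
            back[r, c] = lo + j
            acc[r, c] += prev[j]
    # Backtrack from the cheapest terminal column.
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

# Plant a low-cost curve in column 3 of a toy "image" and recover it.
cost = np.ones((5, 7))
cost[:, 3] = 0.0
path = extract_curve(cost)
```

    In the paper the per-pixel cost would come from the multiplicative speckle-noise likelihood rather than raw intensities.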

  5. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  6. Featured Image: Identifying Weird Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2017-08-01

    Hoag's Object, an example of a ring galaxy. [NASA/Hubble Heritage Team/Ray A. Lucas (STScI/AURA)] The above image (click for the full view) shows Pan-STARRS observations of some of the 185 galaxies identified in a recent study as ring galaxies: bizarre and rare irregular galaxies that exhibit stars and gas in a ring around a central nucleus. Ring galaxies could be formed in a number of ways; one theory is that some might form in a galaxy collision, when a smaller galaxy punches through the center of a larger one, triggering star formation around the center. In a recent study, Ian Timmis and Lior Shamir of Lawrence Technological University in Michigan explore ways that we may be able to identify ring galaxies in the overwhelming number of images expected from large upcoming surveys. They develop a computer analysis method that automatically finds ring galaxy candidates based on their visual appearance, and they test their approach on the 3 million galaxy images from the first Pan-STARRS data release. To see more of the remarkable galaxies the authors found and to learn more about their identification method, check out the paper below. Citation: Ian Timmis and Lior Shamir 2017 ApJS 231 2. doi:10.3847/1538-4365/aa78a3

  7. Reaction Decoder Tool (RDT): extracting features from chemical reactions

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W.; Holliday, Gemma L.; Steinbeck, Christoph; Thornton, Janet M.

    2016-01-01

    Summary: Extracting chemical features like Atom–Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. Availability and implementation: This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder Contact: asad@ebi.ac.uk or s9asad@gmail.com PMID:27153692

  9. Quantitative imaging features: extension of the oncology medical image database

    Science.gov (United States)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important in determining whether a disease is present or a therapy is effective, by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  10. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    OpenAIRE

    Miroslav Benco; Robert Hudec; Patrik Kamencay; Martina Zachariasova; Slavomir Matuska

    2014-01-01

    This paper discusses research in the area of texture image classification. More specifically, the combination of texture and colour features is researched. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in experiments. For the texture classification, the support vector machine is...
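    A from-scratch sketch of the GLCM texture features that this kind of descriptor builds on. The displacement, quantization level and the three Haralick-style features are illustrative choices, not the paper's exact configuration; a colour extension would compute the matrix per channel (or between channel pairs).

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table. `image` must already be
    quantized to integer values in [0, levels)."""
    h, w = image.shape
    P = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Classic texture features derived from the co-occurrence table."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}

# A perfectly uniform texture: zero contrast, maximal energy and homogeneity.
flat = np.zeros((4, 4), dtype=int)
feats = glcm_features(glcm(flat))
```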

  11. HEURISTICAL FEATURE EXTRACTION FROM LIDAR DATA AND THEIR VISUALIZATION

    OpenAIRE

    Ghosh, S.; Lohani, B.

    2012-01-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density-based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clu...
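    The density-based clustering step can be sketched with scikit-learn's DBSCAN on synthetic 3D returns. The `eps` and `min_samples` values and the toy point cloud are assumptions for illustration, not values from the paper, and the follow-up heuristics are not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Two synthetic "objects" in (x, y, z) plus sparse outliers, standing in for
# LiDAR returns from, say, a building and a tree crown.
building = rng.normal([0.0, 0.0, 10.0], 0.3, size=(100, 3))
tree = rng.normal([15.0, 15.0, 6.0], 0.3, size=(80, 3))
noise = rng.uniform(-5, 25, size=(5, 3))
points = np.vstack([building, tree, noise])

# Density-based clustering: points with >= min_samples neighbours within eps
# grow clusters; isolated returns are labelled -1 (noise).
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

    Heuristics of the kind the paper develops would then inspect per-cluster statistics (e.g. mean height, planarity) to label each cluster as natural or man-made.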

  12. Dermoscopy analysis of RGB-images based on comparative features

    Science.gov (United States)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Neretin, Evgeny Y.; Kozlov, Sergey V.

    2015-09-01

    In this paper, we propose an algorithm for color and texture analysis of dermoscopic images of human skin based on Haar wavelets, Local Binary Patterns (LBP) and Histogram Analysis. This approach is a modification of the "7-point checklist" clinical method. It is an "absolute" diagnostic method in that it uses only features extracted from the tumor's ROI (Region of Interest), which can be selected manually and/or using a special algorithm. We propose additional features extracted from the same image for comparative analysis of tumor and healthy skin. We used Euclidean distance, Cosine similarity, and the Tanimoto coefficient as comparison metrics between color and texture features extracted separately from the tumor's and the healthy skin's ROI. A classifier for separating melanoma images from other tumors was built with the SVM (Support Vector Machine) algorithm. Classification errors with and without the comparative features between skin and tumor were analyzed, and a significant increase in recognition quality with the comparative features was demonstrated. Moreover, we analyzed two modes (manual and automatic) for ROI selection on tumor and healthy skin areas. We reached 91% sensitivity using the comparative features, in contrast with 77% sensitivity using only the "absolute" method. The specificity was unchanged (94%) in both cases.
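    The three comparison metrics between the tumor and healthy-skin feature vectors are straightforward to state; a sketch follows, where the `comparative_features` packaging is a hypothetical convenience, not the authors' code.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tanimoto(a, b):
    """Tanimoto coefficient for real-valued vectors (extended Jaccard)."""
    dot = float(a @ b)
    return dot / (float(a @ a) + float(b @ b) - dot)

def comparative_features(tumor_vec, skin_vec):
    """Pack the three tumor-vs-healthy-skin comparison scores into one vector,
    to be appended to the 'absolute' ROI features before SVM classification."""
    return np.array([euclidean(tumor_vec, skin_vec),
                     cosine_similarity(tumor_vec, skin_vec),
                     tanimoto(tumor_vec, skin_vec)])

# Comparing a feature vector with itself: distance 0, both similarities 1.
v = np.array([1.0, 2.0, 3.0])
self_scores = comparative_features(v, v)
```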

  13. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique for identifying individual traits and characteristics. Iris recognition is one of the most reliable biometric methods: as iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life. This is in contrast to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. At the present time, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopted the directional high-low pass filter for feature extraction, and a box-counting fractal dimension and iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
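    The box-counting fractal dimension used above as a feature representation can be sketched as follows; the box sizes and the log-log fit are standard choices, not necessarily the authors' exact ones, and the iris-specific filtering is omitted.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary mask: count, for each box
    size s, the number of s-by-s boxes containing any foreground pixel, then
    fit the slope of log N(s) against log(1/s)."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Pool the mask into s-by-s boxes and count the non-empty ones.
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is a 2-dimensional set.
filled = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(filled)
```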

  14. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features, in order to extract the features embodied in a sheet-metal part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  16. Extraction and labeling high-resolution images from PDF documents

    Science.gov (United States)

    Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Accuracy of content-based image retrieval is affected by image resolution among other factors. Higher resolution images enable extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher-supplied PDF formats. As these PDF documents contain little or no meta-data to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small-size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to image labeling: one measures the similarity between two images based upon the image intensity projections on the coordinate axes, and the other upon the normalized cross-correlation between the intensities of the two images. Using image identification based on image intensity projection, we were able to achieve a precision of 92.84% and a recall of 82.18% in labeling of the extracted images.
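    The projection-based similarity measure can be sketched as follows. Using normalized cross-correlation between the 1-D axis projections, and averaging the two axes, are assumptions for illustration; the paper's two approaches may combine these ingredients differently, and both images are assumed to be resized to a common grid beforehand.

```python
import numpy as np

def intensity_projections(img):
    """Project pixel intensities onto the x and y axes (column and row sums)."""
    return img.sum(axis=0).astype(float), img.sum(axis=1).astype(float)

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def projection_similarity(img_a, img_b):
    """Similarity score between two images from their axis projections, used to
    match an extracted high-resolution figure against the labelled small image."""
    ax, ay = intensity_projections(img_a)
    bx, by = intensity_projections(img_b)
    return 0.5 * (ncc(ax, bx) + ncc(ay, by))

# An image matches itself perfectly; flipping it vertically breaks the row match.
demo = np.arange(16, dtype=float).reshape(4, 4)
same = projection_similarity(demo, demo)
flipped = projection_similarity(demo, demo[::-1])
```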

  17. Feature Extraction of Dark Spots Based on SAR Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    赵泉华; 王玉; 李玉

    2016-01-01

    Marine oil spills from operational discharges and ship accidents always have calamitous impacts on the marine environment and ecosystems, even with small oil coverage volumes. Remote sensing solutions using space-borne or airborne sensors are playing an increasingly important role in monitoring, tracking and measuring oil spills and are receiving much more attention from governments and organizations around the world. Compared to airborne sensors, satellite sensors, with their large-extent observation, timely data availability and all-weather operation, have been proven to be more suitable for monitoring oil spills in marine environments, whilst airborne sensors can be easily used to identify polluters and oil spill types but are of limited use due to costs and weather conditions. Currently, the commonly used satellite SAR sensors for this purpose include RADARSAT-1/2, ENVISAT, ERS-1/2, and so on. The detectability of oil spills in SAR images is based on the fact that oil slicks dampen the Bragg waves on the ocean surface and reduce the radar backscatter coefficient. Unfortunately, many other physical phenomena, for example, low-wind areas, wind-shadow areas near coasts, rain cells, currents, upwelling zones, biogenic films, internal waves, and oceanic or atmospheric fronts, can also generate dark areas, known as look-alikes, in SAR intensity images. Another factor which influences the backscatter level and the visibility of oil slicks on the sea surface is the wind level; oil slicks are visible only for a limited range of wind speeds. Generally speaking, SAR-based oil spill recognition includes three stages: dark spot detection, dark spot feature extraction and oil spill classification. The work in this article focuses on the feature extraction of detected dark spots. The task at this stage involves defining and acquiring the features existing in SAR intensity images, which can be efficiently used in the classification stage to distinguish oil spills from look-alikes.

  18. Segmentation of MR images using multiple-feature vectors

    Science.gov (United States)

    Cole, Orlean I. B.; Daemi, Mohammad F.

    1996-04-01

    Segmentation is an important step in the analysis of MR images (MRI). Considerable progress has been made in this area, and numerous reports on 3D segmentation, volume measurement and visualization have been published in recent years. The main purpose of our study is to investigate the power and use of fractal techniques in the extraction of features from MR images of the human brain. These features, supplemented by other features, are used for segmentation, and ultimately for the extraction of a known pathology, in our case multiple-sclerosis (MS) lesions. We are particularly interested in the progress of the lesions and the occurrence of new lesions, which in a typical case are scattered within the image and are sometimes difficult to identify visually. We propose a technique for multi-channel segmentation of MR images using multiple feature vectors. The channels are proton density, T1-weighted and T2-weighted images containing multiple-sclerosis (MS) lesions at various stages of development. We first represent each image as a set of feature vectors which are estimated using fractal techniques, and supplemented by micro-texture features and features from the gray-level co-occurrence matrix (GLCM). These feature vectors are then used in a feature selection algorithm to reduce the dimension of the feature space. The next stage is segmentation and clustering: the selected feature vectors form the input to the segmentation and clustering routines and are used as the initial clustering parameters. For this purpose, we have used the classical K-means as the initial clustering method. The clustered image is then passed to a probabilistic classifier to further classify and validate each region, taking into account the spatial properties of the image. Initially, segmentation results were obtained using the fractal dimension features alone; subsequently, results were also obtained using a combination of the fractal dimension features and the supplementary features mentioned above.

  19. Micromotion feature extraction of radar target using tracking pulses with adaptive pulse repetition frequency adjustment

    Science.gov (United States)

    Chen, Yijun; Zhang, Qun; Ma, Changzheng; Luo, Ying; Yeo, Tat Soon

    2014-01-01

    In multifunction phased array radar systems, different activities (e.g., tracking, searching, imaging, feature extraction, recognition, etc.) need to be performed simultaneously. To relieve conflicts in radar resource allocation, a micromotion feature extraction method using tracking pulses with adaptive pulse repetition frequencies (PRFs) is proposed in this paper. In this method, a varying PRF is utilized to solve the frequency-domain aliasing problem of the micro-Doppler signal. With appropriate atom set construction, the micromotion feature can be extracted and the image of the target can be obtained based on the Orthogonal Matching Pursuit algorithm. In our algorithm, the micromotion feature of a radar target is extracted from the tracking pulses, and the quality of the constructed image is fed back into the radar system to adaptively adjust the PRF of the tracking pulses. Finally, simulation results illustrate the effectiveness of the proposed method.
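    The Orthogonal Matching Pursuit step can be sketched generically. The micromotion-specific atom set construction is the paper's contribution and is replaced here by an orthonormal random dictionary purely for illustration; only the greedy sparse-recovery mechanics are shown.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the current residual, then re-fit all selected
    coefficients jointly by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef, residual

# Orthonormal toy dictionary (columns of a QR factor) standing in for the
# application-specific micro-Doppler atom set.
D, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(10, 10)))
true = np.zeros(10)
true[[1, 4, 7]] = [2.0, -1.0, 3.0]      # a 3-sparse "signal"
y = D @ true
coef, residual = omp(D, y, n_nonzero=3)
```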

  20. The Image-assisted Extraction of Geo-spatial Features from LiDAR Point Clouds

    Institute of Scientific and Technical Information of China (English)

    王建强; 徐招星; 谭金石

    2015-01-01

    Accurately capturing geo-spatial features from LiDAR point clouds is a key task in LiDAR data processing. To address the deficiencies of LiDAR point cloud processing in filtering, classification and accurate feature extraction, this paper presents an image-assisted method for extracting geo-spatial features from LiDAR point clouds. First, according to the elevation values of the LiDAR points, the point cloud is resampled to generate an elevation projection image, which is segmented automatically; meanwhile, features are extracted from a high-resolution DOM (digital orthophoto map). Then, the feature images and the segmented images are analyzed jointly, and the image extraction results are used to correct misclassified and omitted points and to refine the point cloud classification. The feasibility and effectiveness of the method are verified with experiments.
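    The resampling of the point cloud into an elevation projection image can be sketched as below; the max-z-per-cell aggregation rule and the cell size are assumptions (the abstract does not specify how points are aggregated).

```python
import numpy as np

def elevation_image(points, cell=1.0):
    """Resample a LiDAR point cloud (N x 3 array of x, y, z) into a 2D
    elevation-projection image: each grid cell stores the maximum z of the
    points falling in it; empty cells stay NaN."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) // cell).astype(int)   # cell index of each point
    shape = idx.max(axis=0) + 1
    img = np.full(shape, np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(img[i, j]) or z > img[i, j]:
            img[i, j] = z
    return img

# Three toy returns: two share a cell (keep the higher z), one is in its own cell.
pts = np.array([[0.1, 0.1, 5.0],
                [0.2, 0.3, 7.0],
                [1.5, 0.2, 2.0]])
img = elevation_image(pts)
```

    The resulting raster is what the automatic segmentation step would then operate on, alongside features extracted from the DOM.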

  1. Featured Image: Modeling Supernova Remnants

    Science.gov (United States)

    Kohler, Susanna

    2016-05-01

    This image shows a computer simulation of the hydrodynamics within a supernova remnant. The mixing between the outer layers (where color represents the log of density) is caused by turbulence from the Rayleigh-Taylor instability, an effect that arises when the expanding core gas of the supernova is accelerated into denser shell gas. The past standard for supernova-evolution simulations was to perform them in one dimension and then, in post-processing, manually smooth out regions that undergo Rayleigh-Taylor turbulence (an intrinsically multidimensional effect). But in a recent study, Paul Duffell (University of California, Berkeley) has explored how a 1D model could be used to reproduce the multidimensional dynamics that occur in turbulence from this instability. For more information, check out the paper below!CitationPaul C. Duffell 2016 ApJ 821 76. doi:10.3847/0004-637X/821/2/76

  2. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonality condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the applicability of the resulting facial models to practical applications such as face recognition and facial animation.

  3. MRI and PET Image Fusion Using Fuzzy Logic and Image Local Features

    Directory of Open Access Journals (Sweden)

    Umer Javed

    2014-01-01

    The proposed scheme aims to maximally combine the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results show that the proposed scheme produces significantly better results compared to state-of-the-art schemes.

  4. Imaging features of ciliated hepatic foregut cyst

    Institute of Scientific and Technical Information of China (English)

    Song-Hua Fang; Dan-Jun Dong; Shi-Zheng Zhang

    2005-01-01

    Ciliated hepatic foregut cyst (CHFC) is a very rare cystic lesion of the liver that is histologically similar to bronchogenic cyst. We report one case of CHFC that was hard to distinguish from solid-cystic neoplasm in imaging features. Magnetic resonance imaging was helpful in differentiating these cysts from other lesions.

  5. Perceptual image hashing via feature points: performance evaluation and tradeoffs.

    Science.gov (United States)

    Monga, Vishal; Evans, Brian L

    2006-11-01

    We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.
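    The record above builds a hash from quantized feature-point geometry. As a rough illustration of the idea only (the paper uses an iterative feature detector and probabilistic quantization, neither of which is reproduced here; `perceptual_hash` and `quantize_points` are made-up names), a deterministic sketch might quantize point coordinates onto a coarse grid so that perceptually insignificant jitter leaves the digest unchanged:

```python
import hashlib

def quantize_points(points, img_size, grid=8):
    """Map feature-point coordinates onto a coarse grid so that small,
    perceptually insignificant shifts land in the same cells."""
    w, h = img_size
    cells = set()
    for x, y in points:
        cells.add((min(int(x * grid / w), grid - 1),
                   min(int(y * grid / h), grid - 1)))
    return sorted(cells)

def perceptual_hash(points, img_size, grid=8):
    """Hash the occupied-cell bit pattern; jitter that stays inside a
    cell does not change the hash."""
    bits = bytearray(grid * grid // 8)
    for cx, cy in quantize_points(points, img_size, grid):
        idx = cy * grid + cx
        bits[idx // 8] |= 1 << (idx % 8)
    return hashlib.sha256(bytes(bits)).hexdigest()

pts = [(12, 40), (100, 30), (200, 220)]
jittered = [(13, 41), (101, 29), (199, 221)]  # small geometric distortion
h1 = perceptual_hash(pts, (256, 256))
h2 = perceptual_hash(jittered, (256, 256))
print(h1 == h2)  # True: the jitter stays within the 32-pixel cells
```

    In the actual scheme, randomized quantization makes the hash key-dependent and harder for an adversary to forge; this sketch only shows why coarse quantization buys robustness to small geometric distortions.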

  6. The analysis of image feature robustness using CometCloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin stained breast tissue microarray slides which are assessed while simulating different imaging challenges including out of focus, changes in magnification and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include co-occurrence matrix, center-symmetric auto-correlation, texture feature coding method, local binary pattern, and texton. Due to the independence of each transformation and texture descriptor, a network structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations are calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time. It is roughly 10 times slower than local binary pattern and texton. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
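    One of the descriptors evaluated above, the local binary pattern, is simple enough to sketch. The following minimal implementation (basic 8-neighbour LBP on a plain nested list, not necessarily the exact variant used in the study) also makes its illumination robustness visible: adding a constant brightness offset to every pixel leaves the codes unchanged.

```python
def lbp_codes(img):
    """Basic 8-neighbour local binary pattern for each interior pixel
    of a 2-D grey-level image given as a list of lists."""
    # Neighbours visited clockwise starting from the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:  # threshold against centre
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out

img = [[10, 10, 10],
       [10,  9, 10],
       [10, 10, 10]]
print(lbp_codes(img))  # centre darker than all neighbours -> [[255]]
```

    Because each bit only records whether a neighbour is at least as bright as the centre, any monotonic grey-level shift (e.g. a global illumination change) produces identical codes, which is one reason LBP fares well under the illumination variations tested above.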

  7. Age Estimation Based on AAM and 2D-DCT Features of Facial Images

    Directory of Open Access Journals (Sweden)

    Asuman Günay

    2015-02-01

    Full Text Available This paper proposes a novel age estimation method, Global and Local feAture based Age estiMation (GLAAM), relying on global and local features of facial images. Global features are obtained with Active Appearance Models (AAM). Local features are extracted with regional 2D-DCT (2-dimensional Discrete Cosine Transform) of normalized facial images. GLAAM consists of the following modules: face normalization, global feature extraction with AAM, local feature extraction with 2D-DCT, dimensionality reduction by means of Principal Component Analysis (PCA), and age estimation with multiple linear regression. Experiments have shown that GLAAM outperforms many methods previously applied to the FG-NET database.
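    The local features in GLAAM come from regional 2D-DCT coefficients. As an illustrative sketch only (a naive O(n^4) DCT-II on a tiny block; real systems use a fast transform, and the paper's exact regioning is not reproduced here), low-frequency coefficients can be collected in order of increasing frequency to form a compact feature vector:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II (the type used by JPEG) of a square block."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def lowfreq_features(coeffs, k):
    """Keep the k lowest-frequency coefficients (ordered by u+v, a
    zig-zag-like scan) as a compact feature vector."""
    n = len(coeffs)
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1], p))
    return [coeffs[u][v] for u, v in order[:k]]

block = [[1, 2], [3, 4]]
feats = lowfreq_features(dct2(block), 3)
print([round(f, 3) for f in feats])  # [5.0, -1.0, -2.0]
```

    The DC term captures mean intensity and the first AC terms capture coarse gradients, which is why truncating to the low frequencies gives a small yet descriptive local feature.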

  8. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been directed at the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, after a normalization step, the features are concatenated into a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
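    The final step described above, a change map from the pixel-wise Euclidean distance between two feature maps, can be sketched as follows (illustrative only; the tiny feature maps here are hand-made stand-ins for CNN activations, and `change_map` is a hypothetical helper name):

```python
import math

def change_map(feat_a, feat_b):
    """Pixel-wise Euclidean distance between two feature maps of shape
    H x W x C (nested lists); large distances indicate change."""
    h, w, c = len(feat_a), len(feat_a[0]), len(feat_a[0][0])
    return [[math.sqrt(sum((feat_a[y][x][k] - feat_b[y][x][k]) ** 2
                           for k in range(c)))
             for x in range(w)] for y in range(h)]

# One-row "image": the first pixel's features are unchanged, the
# second pixel's features differ between the two dates.
a = [[[0.0, 0.0], [1.0, 1.0]]]
b = [[[0.0, 0.0], [4.0, 5.0]]]
print(change_map(a, b))  # [[0.0, 5.0]]
```

    A binary change mask then follows from thresholding this distance map, e.g. keeping pixels whose distance exceeds a value chosen from the map's statistics.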

  9. MR imaging features of craniodiaphyseal dysplasia

    Energy Technology Data Exchange (ETDEWEB)

    Marden, Franklin A. [Mallinckrodt Institute of Radiology, Washington University Medical Center, 510 South Kingshighway Blvd., MO 63110, St. Louis (United States); Department of Radiology, St. Louis Children's Hospital, Children's Place, MO 63110, St. Louis (United States); Wippold, Franz J. [Mallinckrodt Institute of Radiology, Washington University Medical Center, 510 South Kingshighway Blvd., MO 63110, St. Louis (United States); Department of Radiology, St. Louis Children's Hospital, Children's Place, MO 63110, St. Louis (United States); Department of Radiology/Nuclear Medicine, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, MD 20814, Bethesda (United States)

    2004-02-01

    We report the magnetic resonance (MR) imaging findings in a 4-year-old girl with characteristic radiographic and computed tomography (CT) features of craniodiaphyseal dysplasia. MR imaging exquisitely depicted cranial nerve compression, small foramen magnum, hydrocephalus, and other intracranial complications of this syndrome. A syrinx of the cervical spinal cord was demonstrated. We suggest that MR imaging become a routine component of the evaluation of these patients. (orig.)

  10. HDR Imaging for Feature Detection on Detailed Architectural Scenes

    Science.gov (United States)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

    3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot depict properly a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may be thus prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and increase in this way the amount of details contained in the image. Experimental results of this study prove this assumption as they examine state of the art feature detectors applied both on standard dynamic range and HDR images.

  11. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot depict properly a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may be thus prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and increase in this way the amount of details contained in the image. Experimental results of this study prove this assumption as they examine state of the art feature detectors applied both on standard dynamic range and HDR images.

  12. Handwritten Character Classification using the Hotspot Feature Extraction Technique

    NARCIS (Netherlands)

    Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2012-01-01

    Feature extraction techniques can be important in character recognition, because they can enhance the efficacy of recognition in comparison to featureless or pixel-based approaches. This study aims to investigate the novel feature extraction technique called the hotspot technique in order to use it

  13. An Image Feature Extraction Method Based on Adaptive Fusion of Object and Background

    Institute of Scientific and Technical Information of China (English)

    于来行; 冯林; 张晶; 刘胜蓝

    2016-01-01

    Existing image feature extraction algorithms based on structural-element descriptors lack a correlation description of continuous pixels or structural elements, and are therefore not discriminative enough. To address this problem, this paper presents a novel weighted quantization method that adaptively fuses image object features and background features into one image histogram, built on newly defined structure elements, an adaptive vector fusion model, and the concept of connected granules. Firstly, based on visual selection characteristics, nine new kinds of structure elements are defined, and connected-granule attributes and a hierarchical statistical model are constructed. Secondly, the corresponding mapping sub-graphs are generated by color transformation and structure-element matching, and the statistical structure-element and connectivity feature vectors are extracted from them. Finally, the adaptive vector fusion model combines the components into a single feature vector used for image retrieval. Extensive experiments on three Corel datasets demonstrate that, compared with other algorithms, the proposed method is more stable and achieves higher retrieval precision; it captures both the global features of an image and its local detail information.

  14. Analytical Study of Feature Extraction Techniques in Opinion Mining

    Directory of Open Access Journals (Sweden)

    Pravesh Kumar Singh

    2013-07-01

    Full Text Available Although opinion mining is at a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract people's opinions based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering, support vector machines, etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses the various techniques, and the second part makes a detailed appraisal of the major techniques used for feature extraction.

  15. Solving jigsaw puzzles using image features

    DEFF Research Database (Denmark)

    Nielsen, Ture R.; Drewsen, Peter; Hansen, Klaus

    2008-01-01

    In this article, we describe a method for automatically solving the jigsaw puzzle problem based on using image features instead of the shape of the pieces. The image features are used for obtaining an accurate measure of edge similarity, used in a new edge matching algorithm. The algorithm is used in a general puzzle solving method based on a greedy algorithm previously proved successful, which exploits the divide and conquer paradigm to reduce the combinatorially complex problem by classifying the puzzle pieces and comparing pieces drawn from the same group. We have been able to solve computer generated puzzles of 320 pieces as well as a real puzzle of 54 pieces by exclusively using image information. The paper includes a brief preliminary investigation of some image features used in the classification. Additionally, we investigate a new scalable...

  16. Exact feature probabilities in images with occlusion

    CERN Document Server

    Pitkow, Xaq

    2010-01-01

    To understand the computations of our visual system, it is important to understand also the natural environment it evolved to interpret. Unfortunately, existing models of the visual environment are either unrealistic or too complex for mathematical description. Here we describe a naturalistic image model and present a mathematical solution for the statistical relationships between the image features and model variables. The world described by this model is composed of independent, opaque, textured objects which occlude each other. This simple structure allows us to calculate the joint probability distribution of image values sampled at multiple arbitrarily located points, without approximation. This result can be converted into probabilistic relationships between observable image features as well as between the unobservable properties that caused these features, including object boundaries and relative depth. Using these results we explain the causes of a wide range of natural scene properties, including high...

  17. Efficient sparse kernel feature extraction based on partial least squares.

    Science.gov (United States)

    Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John

    2009-08-01

    The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.

  18. Evaluation of textural features for multispectral images

    Science.gov (United States)

    Bayram, Ulya; Can, Gulcan; Duzgun, Sebnem; Yalabik, Nese

    2011-11-01

    Remote sensing is a field of wide use and great importance, so the performance of the selected features plays a major role. In order to gain some perspective on useful textural features, we have brought together state-of-the-art textural features from the recent literature that have yet to be applied in the remote sensing field, and we present a comparison with traditional ones. We selected the textural features most commonly used in remote sensing, namely grey-level co-occurrence matrix (GLCM) and Gabor features. The other selected features are local binary patterns (LBP), edge orientation features extracted after applying a steerable filter, and histogram of oriented gradients (HOG) features. A color histogram feature is also used and compared. Since most of these features are histogram-based, we compared the performance of bin-by-bin comparison with a histogram comparison method called the diffusion distance. The performance of each feature is evaluated using k-nearest neighbor (k-NN) classification.
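    Most of the features compared above are histograms, so the choice of comparison metric matters. Two common bin-by-bin measures are sketched below (illustrative only; the diffusion distance evaluated in the study is not reproduced here). Bin-by-bin metrics ignore similarity between neighbouring bins, which is exactly the weakness cross-bin measures such as the diffusion distance address:

```python
def l1_distance(h1, h2):
    """Bin-by-bin L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for normalized histograms; 1 = identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

h1 = [0.5, 0.5, 0.0]
h2 = [0.0, 0.5, 0.5]  # same shape, shifted by one bin
print(l1_distance(h1, h2), histogram_intersection(h1, h2))  # 1.0 0.5
```

    Note that a one-bin shift is penalized as heavily as moving mass to the opposite end of the histogram; a cross-bin metric would score the shifted histogram as much closer.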

  19. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances in the background of the image. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to find the optimal threshold. The final segmentation result was processed by morphology operations to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.
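    The I-component feature image above follows directly from the standard RGB-to-YIQ transform. This is an illustrative fragment only: the fixed threshold below stands in for the paper's adaptive threshold, and the a*-component extraction and wavelet fusion steps are not reproduced.

```python
def i_component(rgb_img):
    """In-phase (I) channel of the YIQ colour space; it is large for
    reddish pixels (ripe tomato) and negative for green foliage."""
    return [[0.596 * r - 0.274 * g - 0.322 * b for r, g, b in row]
            for row in rgb_img]

def binarize(img, thresh):
    """Fixed-threshold segmentation (a constant stands in here for the
    adaptive threshold used in the paper)."""
    return [[1 if v > thresh else 0 for v in row] for row in img]

row = [(200, 30, 30), (40, 160, 40)]  # a red tomato pixel, a green leaf pixel
i_row = i_component([row])[0]
print(binarize([i_row], 50.0))  # [[1, 0]]: tomato kept, foliage rejected
```

    The separation works because the I axis of YIQ roughly spans orange-to-cyan, so red fruit and green leaves land on opposite sides of zero regardless of moderate brightness changes.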

  20. Improved Framework for Breast Cancer Detection using Hybrid Feature Extraction Technique and FFNN

    Directory of Open Access Journals (Sweden)

    Ibrahim Mohamed Jaber Alamin

    2016-10-01

    Full Text Available Early breast cancer detection based on image processing suffers from low accuracy in various automated medical tools. To improve accuracy, research continues on the different phases, such as segmentation, feature extraction, detection, and classification. This paper presents a hybrid, automated image-processing framework for breast cancer detection, consisting of four main steps: image preprocessing, image segmentation, feature extraction, and finally classification. For image preprocessing, both Laplacian and average filtering are used for smoothing and noise reduction, if any. These operations are performed on a 256 x 256 gray scale image. The output of the preprocessing phase feeds the segmentation phase, and a separate algorithm is designed for the preprocessing step with the goal of improving accuracy. The segmentation method is an improved version of the region growing technique: breast image segmentation is performed using a modified region growing technique that overcomes the limitations of orientation as well as intensity. The next step is feature extraction, for which we propose a combination of different types of features: texture features, gradient features, and 2D-DWT features with higher order statistics (HOS). Such a hybrid feature set helps improve detection accuracy. For the last phase, we propose an efficient feed-forward neural network (FFNN). A comparative study between the existing 2D-DWT feature extraction method and the proposed HOS-2D-DWT-based feature extraction method is presented.
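    The segmentation step above is a modified region growing technique. A minimal sketch of plain region growing (breadth-first search from a seed, accepting neighbours within an intensity tolerance; the paper's modifications for orientation and intensity are not reproduced here) looks like:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` by BFS, accepting 4-connected
    neighbours whose intensity is within `tol` of the seed value."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    mask = [[0] * w for _ in range(h)]
    mask[sy][sx] = 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and abs(img[ny][nx] - base) <= tol:
                mask[ny][nx] = 1
                q.append((ny, nx))
    return mask

img = [[10, 11, 50],
       [12, 10, 52],
       [10, 13, 55]]
print(region_grow(img, (0, 0), tol=5))  # left two columns only
```

    The classic weakness visible here, comparing every candidate against the original seed intensity, is one of the limitations that modified schemes like the one above are designed to overcome.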

  1. FAST DISCRETE CURVELET TRANSFORM BASED ANISOTROPIC FEATURE EXTRACTION FOR IRIS RECOGNITION

    Directory of Open Access Journals (Sweden)

    Amol D. Rahulkar

    2010-11-01

    Full Text Available Feature extraction plays a very important role in iris recognition. Recent research on multiscale analysis provides a good opportunity to extract more accurate information for iris recognition. In this work, a new directional iris texture feature based on the 2-D Fast Discrete Curvelet Transform (FDCT) is proposed. The proposed approach divides the normalized iris image into six sub-images and applies the curvelet transform independently to each sub-image. The anisotropic feature vector for each sub-image is derived from the directional energies of the curvelet coefficients. These six feature vectors are combined to create the resultant feature vector. During recognition, a nearest neighbor classifier based on Euclidean distance is used for authentication. The effectiveness of the proposed approach has been tested on two different databases, namely UBIRIS and MMU1. Experimental results show the superiority of the proposed approach.

  2. A new Color Feature Extraction method Based on Dynamic Color Distribution Entropy of Neighbourhoods

    Directory of Open Access Journals (Sweden)

    Fatemeh Alamdar

    2011-09-01

    Full Text Available One of the important requirements in image retrieval, indexing, classification, clustering, etc. is extracting efficient features from images. The color feature is one of the most widely used visual features, and the color histogram is the most common way of representing it. One disadvantage of the color histogram is that it does not take the spatial distribution of color into consideration. In this paper, a dynamic color distribution entropy of neighborhoods method based on color distribution entropy is presented, which effectively describes the spatial information of colors. The image retrieval results, compared to improved color distribution entropy, show the acceptable efficiency of this approach.
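    The core idea above, scoring how a colour's pixels are spread in space, can be illustrated with a plain Shannon entropy over spatial bins (a deliberate simplification; the paper's dynamic neighbourhood scheme is not reproduced, and `spatial_entropy` is a made-up helper):

```python
import math

def spatial_entropy(positions, bins):
    """Shannon entropy of how one colour's pixels spread over spatial
    bins; low entropy = concentrated, high entropy = scattered."""
    counts = [0] * bins
    for b in positions:
        counts[b] += 1
    total = sum(counts)
    ent = 0.0
    for c in counts:
        if c:
            p = c / total
            ent -= p * math.log2(p)
    return ent

# Colour A sits entirely in one spatial bin, colour B is spread over
# all four bins; a plain colour histogram could not tell them apart.
print(spatial_entropy([0, 0, 0, 0], 4))  # 0.0
print(spatial_entropy([0, 1, 2, 3], 4))  # 2.0
```

    Two images with identical colour histograms but different spatial layouts thus receive different entropy signatures, which is precisely the information a plain histogram discards.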

  3. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award-winning Feature Analyst®...

  4. Feature statistic analysis of ultrasound images of liver cancer

    Science.gov (United States)

    Huang, Shuqin; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    In this paper, a specific feature analysis of liver ultrasound images including normal liver, liver cancer especially hepatocellular carcinoma (HCC) and other hepatopathy is discussed. According to the classification of hepatocellular carcinoma (HCC), primary carcinoma is divided into four types. 15 features from single gray-level statistic, gray-level co-occurrence matrix (GLCM), and gray-level run-length matrix (GLRLM) are extracted. Experiments for the discrimination of each type of HCC, normal liver, fatty liver, angioma and hepatic abscess have been conducted. Corresponding features to potentially discriminate them are found.
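    Of the 15 features mentioned, the GLCM-based ones are easy to sketch. The fragment below builds a normalized co-occurrence matrix for one pixel offset and computes the contrast feature (illustrative only; the offsets, grey-level quantization, and exact feature list of the study may differ):

```python
def glcm(img, levels, dy=0, dx=1):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalized into a joint probability table."""
    mat = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    n = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[img[y][x]][img[ny][nx]] += 1
                n += 1
    return [[v / n for v in row] for row in mat]

def contrast(p):
    """GLCM contrast: large when neighbouring grey levels differ."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

flat = [[1, 1], [1, 1]]   # uniform texture
edgy = [[0, 1], [1, 0]]   # alternating texture
print(contrast(glcm(flat, 2)), contrast(glcm(edgy, 2)))  # 0.0 1.0
```

    Other GLCM features such as energy, homogeneity, and correlation are computed from the same probability table, so one matrix yields several of the texture descriptors mentioned above.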

  5. Scene classification of infrared images based on texture feature

    Science.gov (United States)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    Scene classification refers to assigning a physical scene to one of a set of predefined categories. Texture features provide a good approach to classifying scenes. Texture can be considered to be repeating patterns of local variation in pixel intensities, and texture analysis is important in many applications of computer image analysis for the classification or segmentation of images based on local spatial variations of intensity. Texture describes the structural information of an image, so it provides data for classification that complements the spectrum. Infrared thermal imagers are now used in many different fields. Since infrared images of objects reflect their own thermal radiation, infrared images have some shortcomings: poor contrast between objects and background, blurred edges, heavy noise, and so on. These shortcomings make it difficult to extract texture features from infrared images. In this paper we have developed a texture-feature-based algorithm to classify scenes in infrared images. We study texture extraction using the Gabor wavelet transform, which has an excellent capability for analyzing local frequency and orientation; Gabor wavelets were chosen for their biological relevance and technical properties. First, after introducing the Gabor wavelet transform and texture analysis methods, texture features are extracted from the infrared images via the Gabor wavelet transform, exploiting the multi-scale property of the Gabor filter. Second, we take multi-dimensional means and standard deviations at different scales and orientations as texture parameters. The last stage is the classification of scene texture parameters with the least squares support vector machine (LS-SVM) algorithm. SVM is based on the principle of structural risk minimization (SRM). Compared with SVM, LS-SVM has overcome the shortcoming of
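    A Gabor filter bank like the one described is generated from a single kernel function by varying orientation and wavelength. Below is a sketch of the real part of one kernel (parameter conventions vary between implementations; this is one common form, and `gabor_kernel` is a hypothetical helper, not the paper's code):

```python
import math

def gabor_kernel(size, theta, wavelength, sigma):
    """Real part of a Gabor filter: a sinusoid at orientation `theta`
    modulated by a Gaussian envelope of width `sigma`."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the sinusoid runs along `theta`.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=5, theta=0.0, wavelength=4.0, sigma=2.0)
print(round(k[2][2], 3))  # kernel centre: envelope * cos(0) = 1.0
```

    Convolving the image with such kernels at several values of `theta` and `wavelength`, then taking the mean and standard deviation of each response map, yields exactly the kind of multi-scale, multi-orientation texture parameters described above.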

  6. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting the road network from a high-resolution SAR image. Firstly, fuzzy C-means is used to classify the filtered SAR image in an unsupervised manner, and the road pixels are isolated from the image to simplify the extraction of the road network. Secondly, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces road network extraction to a global search for optimal continuous curves passing through some seed points. Finally, regarding the curves as individuals and coding a chromosome using an integer code of the variance relative to coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract road networks from high-resolution SAR images.

  7. Synthetic range profiling, ISAR imaging of sea vessels and feature extraction, using a multimode radar to classify targets: initial results from field trials

    CSIR Research Space (South Africa)

    Abdul Gaffar, MY

    2011-04-01

    Full Text Available -based classification of small to medium sized sea vessels in littoral condition. The experimental multimode radar is based on an experimental tracking radar that was modified to generate SRP and ISAR images in both search and tracking modes. The architecture...

  8. Face recognition with multi-resolution spectral feature images.

    Directory of Open Access Journals (Sweden)

    Zhan-Li Sun

    Full Text Available The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.

  9. Face recognition with multi-resolution spectral feature images.

    Science.gov (United States)

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.

  10. An Open Source Agenda for Research Linking Text and Image Content Features.

    Science.gov (United States)

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  11. Feature analysis for detecting people from remotely sensed images

    Science.gov (United States)

    Sirmacek, Beril; Reinartz, Peter

    2013-01-01

    We propose a novel approach using airborne image sequences for detecting dense crowds and individuals. Although airborne images of this resolution range are not enough to see each person in detail, we can still notice a change of color and intensity components of the acquired image in the location where a person exists. Therefore, we propose a local feature detection-based probabilistic framework to detect people automatically. Extracted local features behave as observations of the probability density function (PDF) of the people locations to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding PDF. First, we use estimated PDF to detect boundaries of dense crowds. After that, using background information of dense crowds and previously extracted local features, we detect other people in noncrowd regions automatically for each image in the sequence. To test our crowd and people detection algorithm, we use airborne images taken over Munich during the Oktoberfest event, two different open-air concerts, and an outdoor festival. In addition, we apply tests on GeoEye-1 satellite images. Our experimental results indicate possible use of the algorithm in real-life mass events.
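
The density-estimation step described above can be illustrated with a fixed-bandwidth Gaussian KDE (the paper uses an adaptive kernel estimator); all data and parameters below are synthetic.

```python
import numpy as np

def kde_grid(points, xs, ys, bandwidth=1.5):
    """Gaussian kernel density estimate of detected feature locations
    on a regular grid (a fixed-bandwidth stand-in for adaptive KDE)."""
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)        # (G, 2)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    pdf = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    pdf /= pdf.sum()                                         # normalize over grid
    return pdf.reshape(gy.shape)

# toy usage: a dense cluster of detections plus one isolated person
rng = np.random.default_rng(1)
crowd = rng.normal([5.0, 5.0], 0.5, (50, 2))
pts = np.vstack([crowd, [[15.0, 15.0]]])
xs = ys = np.linspace(0, 20, 41)
density = kde_grid(pts, xs, ys)
crowd_mask = density > 0.5 * density.max()   # rough dense-crowd boundary
```

Thresholding the estimated PDF gives the dense-crowd region; the isolated detection survives only as a faint mode.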

  12. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.
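
The per-pixel classification idea above (local features in, vessel/non-vessel label out) can be sketched with a tiny hand-rolled network; this is not the authors' architecture or feature set, just a minimal backprop example on synthetic per-pixel feature vectors.

```python
import numpy as np

def train_pixel_net(X, y, hidden=8, lr=1.0, epochs=2000, seed=0):
    """Tiny one-hidden-layer sigmoid network trained by full-batch backprop
    to label pixels as vessel / non-vessel from per-pixel feature vectors."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)              # hidden activations
        p = sig(H @ W2 + b2)              # vessel probability per pixel
        g = (p - y) / len(y)              # dL/dz for cross-entropy loss
        gh = np.outer(g, W2) * H * (1 - H)
        W2 -= lr * H.T @ g; b2 -= lr * g.sum()
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)
    return lambda Xn: sig(sig(Xn @ W1 + b1) @ W2 + b2)

# toy usage: 'vessel' pixels have a high ridge response in feature 0
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0.5).astype(float)
predict = train_pixel_net(X, y)
acc = ((predict(X) > 0.5) == (y > 0.5)).mean()
```

In the paper, the feature vector would hold the local filter responses (including the modified road detector) rather than random values.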

  13. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

    Full Text Available Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper compiles a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). Energy distribution over wavelet sub-bands is applied to compute these texture features. Wavelet features were obtained from the Daubechies (db3), Symlets (sym3), and Biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. The technique extracts energy signatures obtained using the 2-D discrete wavelet transform, and the energy obtained from the detail coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
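
The energy-signature idea can be reproduced with a one-level 2D Haar transform in place of the db3/sym3/bior filters used in the paper (a simplifying assumption):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: approximation plus three
    detail sub-bands (a simple stand-in for db3/sym3/bio3.x filters)."""
    a = (img[0::2] + img[1::2]) / 2.0        # rows: average
    d = (img[0::2] - img[1::2]) / 2.0        # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_energy_features(img, levels=2):
    """Mean energy of the detail sub-bands at each level: a texture signature."""
    feats = []
    band = np.asarray(img, dtype=float)
    for _ in range(levels):
        band, lh, hl, hh = haar2d(band)
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
    return np.array(feats)

# toy usage: a striped patch has far more detail energy than a flat one
striped = np.tile([0.0, 1.0], (64, 32))
flat = np.full((64, 64), 0.5)
f_striped = wavelet_energy_features(striped)
f_flat = wavelet_energy_features(flat)
```

The resulting 6-element vector (3 detail bands x 2 levels) would be the input to the PNN classifier.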

  14. Interplay of spatial aggregation and computational geometry in extracting diagnostic features from cardiac activation data.

    Science.gov (United States)

    Ironi, Liliana; Tentoni, Stefania

    2012-09-01

    Functional imaging plays an important role in the assessment of organ functions, as it provides methods to represent the spatial behavior of diagnostically relevant variables within reference anatomical frameworks. The salient physical events that underlie a functional image can be unveiled by appropriate feature extraction methods capable of exploiting domain-specific knowledge and spatial relations at multiple abstraction levels and scales. In this work we focus on general feature extraction methods that can be applied to cardiac activation maps, a class of functional images that embed spatio-temporal information about the wavefront propagation. The described approach integrates a qualitative spatial reasoning methodology with techniques borrowed from computational geometry to provide a computational framework for the automated extraction of basic features of the activation wavefront kinematics and specific sets of diagnostic features that identify an important class of rhythm pathologies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
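
One basic wavefront-kinematics feature of an activation map, propagation speed and direction from the activation-time gradient, can be sketched as follows; the plane-wave example and grid spacing are assumptions, not the record's method:

```python
import numpy as np

def wavefront_kinematics(T, spacing=1.0):
    """From an activation-time map T(x, y), estimate per-pixel propagation
    direction (unit vector along the gradient of T) and speed 1/|grad T|."""
    Ty, Tx = np.gradient(T, spacing)
    mag = np.hypot(Tx, Ty)
    mag_safe = np.where(mag > 1e-9, mag, np.inf)
    speed = 1.0 / mag_safe                    # wavefront normal velocity
    direction = np.stack([Tx, Ty]) / mag_safe
    return speed, direction

# toy usage: a plane wave moving along +x at 2 spatial units per ms
x = np.arange(32) * 1.0
T = np.tile(x / 2.0, (32, 1))                # T = x / speed
speed, direction = wavefront_kinematics(T)
```

Regions of anomalously low speed or abruptly rotating direction are the kind of local kinematic cue the diagnostic features build on.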

  15. Feature preserving compression of high resolution SAR images

    Science.gov (United States)

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing

    2006-10-01

    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in the original images, especially in smooth areas and at edges with low contrast. This is known as the "smoothing effect". It makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that reduces the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the block. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that the quality of our algorithm at up to a 16:1 compression ratio is improved significantly, and more weak information is preserved.

  16. Histopathological Image Classification Using Discriminative Feature-Oriented Dictionary Learning.

    Science.gov (United States)

    Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, U K Arvind

    2016-03-01

    In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy against the number of training images which is highly desirable in practice where generous training is often not available.
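
The classify-by-reconstruction-residual idea behind DFDL can be sketched with SVD subspaces standing in for the learned dictionaries (an intentional simplification: DFDL learns discriminative dictionaries under a sparsity constraint, which this least-squares sketch omits):

```python
import numpy as np

def class_dictionaries(X, y, atoms=5):
    """One 'dictionary' per class: the top-`atoms` left singular vectors of
    that class's training columns (a least-squares stand-in for DFDL)."""
    dicts = {}
    for c in np.unique(y):
        Xc = X[:, np.asarray(y) == c]
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        dicts[c] = U[:, :atoms]
    return dicts

def classify(x, dicts):
    """Assign the class whose dictionary reconstructs x with least residual."""
    best, best_r = None, np.inf
    for c, D in dicts.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        r = np.linalg.norm(x - D @ coef)
        if r < best_r:
            best, best_r = c, r
    return best

# toy usage: two classes living in different low-dimensional subspaces
rng = np.random.default_rng(2)
B0, B1 = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))
X0 = B0 @ rng.normal(size=(3, 30))
X1 = B1 @ rng.normal(size=(3, 30))
X = np.hstack([X0, X1])
y = [0] * 30 + [1] * 30
dicts = class_dictionaries(X, y, atoms=3)
pred = classify(B0 @ rng.normal(size=3), dicts)
```

A sample drawn from class 0's subspace is reconstructed almost perfectly by dictionary 0 but poorly by dictionary 1, which is exactly the discriminative asymmetry DFDL optimizes for.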

  17. Point Feature Extraction of Remote Sensing Image Using SI-Harris

    Institute of Scientific and Technical Information of China (English)

    孟伟灿; 朱述龙; 曹闻; 杨海鹏; 刘岩

    2014-01-01

    Scale Invariant Harris (SI-Harris) was applied to detecting interest points on remotely sensed images, and its rigorous formulas were derived. The focus of the paper is the experimental comparison between the results obtained by Harris and SI-Harris on remote sensing images of different resolutions. The repeatability rate was used to quantitatively evaluate the experimental results. The experiments indicate that the repeatability rate of SI-Harris is significantly higher than that of Harris. SI-Harris can be used to detect interest points on remote sensing images of different spatial resolutions and can thus serve the matching of images of different resolutions.
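
The plain Harris response that SI-Harris builds on can be sketched as follows (no scale normalization here, and a crude box window replaces the usual Gaussian; the image and parameters are illustrative):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 computed from the
    gradient structure tensor M, smoothed with a simple box window."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def box(a, r=2):  # box window in place of the Gaussian
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

# toy usage: a bright square on a dark background has four strong corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)
```

SI-Harris additionally normalizes the derivatives across scales so the response stays comparable when the image resolution changes.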

  18. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    This work addresses feature extraction from medical images and fuses the different extracted visual features and textual features for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features and a SIFT histogram as a local feature. As the textual feature of the image representation, the binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the modality pair and improve performance. The proposed strategy is evaluated with the modality dataset provided by ImageCLEF 2010.

  19. Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery

    Science.gov (United States)

    2014-12-01

    TECHNICAL REPORT 2070, December 2014: Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery. The descriptors are then clustered and pooled with respect to a dictionary of vocabulary features obtained from training imagery. The image is

  20. Extracting Conceptual Feature Structures from Text

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Lassen, Tine;

    2011-01-01

    This paper describes an approach to indexing texts by their conceptual content using ontologies along with lexico-syntactic information and semantic role assignment provided by lexical resources. The conceptual content of meaningful chunks of text is transformed into conceptual feature structures...

  1. Two-level hierarchical feature learning for image classification

    Institute of Scientific and Technical Information of China (English)

    Guang-hui SONG; Xiao-gang JIN; Gen-lang CHEN; Yan NIE

    2016-01-01

    In some image classification tasks, similarities among different categories are different and the samples are usually misclassified as highly similar categories. To distinguish highly similar categories, more specific features are required so that the classifier can improve the classification performance. In this paper, we propose a novel two-level hierarchical feature learning framework based on the deep convolutional neural network (CNN), which is simple and effective. First, the deep feature extractors of different levels are trained using the transfer learning method that fine-tunes the pre-trained deep CNN model toward the new target dataset. Second, the general feature extracted from all the categories and the specific feature extracted from highly similar categories are fused into a feature vector. Then the final feature representation is fed into a linear classifier. Finally, experiments using the Caltech-256, Oxford Flower-102, and Tasmania Coral Point Count (CPC) datasets demonstrate that the expression ability of the deep features resulting from two-level hierarchical feature learning is powerful. Our proposed method effectively increases the classification accuracy in comparison with flat multiple classification methods.

  2. A Method of SAR Target Recognition Based on Gabor Filter and Local Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Wang Lu

    2015-12-01

    Full Text Available This paper presents a novel texture feature extraction method based on a Gabor filter and Three-Patch Local Binary Patterns (TPLBP) for Synthetic Aperture Radar (SAR) target recognition. First, SAR images are processed by a Gabor filter in different directions to enhance the significant features of the targets and their shadows. Then, effective local texture features are extracted from the Gabor-filtered images by TPLBP. This not only overcomes the shortcoming of Local Binary Patterns (LBP), which cannot describe texture features over large-scale neighborhoods, but also maintains rotation invariance, which alleviates the impact of direction variations of SAR targets on recognition performance. Finally, we classify the extracted texture features with an Extreme Learning Machine (ELM) classifier. Experimental results on the MSTAR database demonstrate the effectiveness of the proposed method.
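
The Gabor filtering stage can be sketched as below; the TPLBP step is omitted, and the kernel parameters (size, sigma, wavelength) are illustrative rather than the paper's settings:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real-valued Gabor kernel oriented at angle theta."""
    r = ksize // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)
    return g - g.mean()   # zero-mean so flat regions respond with 0

def filter2d(img, k):
    """Naive 'valid'-mode 2D correlation with NumPy only."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# toy usage: vertical stripes respond strongly to the theta=0 kernel,
# which oscillates along x, and weakly to the theta=pi/2 kernel
img = np.tile([0.0, 0.0, 1.0, 1.0], (40, 10))
resp_v = filter2d(img, gabor_kernel(theta=0.0))
resp_h = filter2d(img, gabor_kernel(theta=np.pi / 2))
```

A filter bank over several orientations, as in the paper, is just this kernel evaluated at a set of theta values; TPLBP codes would then be computed on each response image.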

  3. Imaging features of alveolar soft part sarcoma

    Institute of Scientific and Technical Information of China (English)

    Teng Jin; Ping Zhang Co-first author; Xiaoming Li

    2015-01-01

    Objective The aim of this study was to analyze the imaging features of alveolar soft part sarcoma (ASPS). Methods The imaging features of 11 cases with ASPS were retrospectively analyzed. Results ASPS mainly exhibited an isointense or slightly high signal intensity on T1-weighted imaging (T1WI), and a mixed high signal on T2-weighted imaging (T2WI). ASPS was partially delimited, with rich tortuous flow voids, or “line-like” low-signal septa. The substance of the mass showed heterogeneous enhancement. 1H-MRS showed a slight choline peak at 3.2 ppm. Conclusion The well-circumscribed mass and blood voids, combined with “line-like” low signals, play a significant role in diagnosis. The choline peak and the other signs may serve as auxiliary diagnostic clues.

  4. [RVM supervised feature extraction and Seyfert spectra classification].

    Science.gov (United States)

    Li, Xiang-Ru; Hu, Zhan-Yi; Zhao, Yong-Heng; Li, Xiao-Ming

    2009-06-01

    With recent technological advances in wide-field survey astronomy and the implementation of several large-scale astronomical survey proposals (e.g., SDSS, 2dF and LAMOST), celestial spectra are becoming very abundant and rich. Therefore, research on automated classification methods based on celestial spectra has been attracting more and more attention in recent years. Feature extraction is a fundamental problem in automated spectral classification, which not only influences the difficulty and complexity of the problem, but also determines the performance of the designed classifying system. The available methods of feature extraction for spectra classification are usually unsupervised, e.g., principal component analysis (PCA), wavelet transform (WT), artificial neural networks (ANN) and Rough Set theory. These methods extract features not by their capability to classify spectra, but by some kind of power to approximate the original celestial spectra. Therefore, the features extracted by these methods are usually not the best ones for classification. In the present work, the authors pointed out the necessity of investigating supervised feature extraction by analyzing the characteristics of the spectra classification research in the available literature and the limitations of unsupervised feature extraction methods. The authors also studied supervised feature extraction based on the relevance vector machine (RVM) and its application in Seyfert spectra classification. RVM is a recently introduced method based on Bayesian methodology, automatic relevance determination (ARD), regularization techniques and a hierarchical prior structure. With this method, the authors can easily fuse the information in the training data with their prior knowledge and beliefs about the problem. RVM can effectively extract the features and reduce the data based on classifying capability. Extensive experiments show its superior performance in dimensionality reduction and feature extraction for Seyfert spectra classification.

  5. Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language.

    Science.gov (United States)

    Shanableh, Tamer; Assaleh, Khaled; Al-Rousan, M

    2007-06-01

    This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognitions of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extractions. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K nearest neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields comparable results to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is actually advantageous for both computational and storage requirements of the classifier.
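
The accumulate-thresholded-prediction-errors idea can be sketched with simple frame differences standing in for the forward prediction errors (a simplification; the threshold and toy sequence are assumptions):

```python
import numpy as np

def motion_image(frames, thresh=0.2):
    """Accumulate thresholded frame-to-frame errors into a single image
    that summarizes the motion of the whole sequence."""
    frames = np.asarray(frames, dtype=float)
    acc = np.zeros(frames.shape[1:])
    for prev, cur in zip(frames[:-1], frames[1:]):
        err = np.abs(cur - prev)        # stand-in for the prediction error
        acc += (err > thresh)
    return acc / (len(frames) - 1)

# toy usage: a bright block sliding right across a static background
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 6:10, 2 + 2 * t:6 + 2 * t] = 1.0
motion = motion_image(frames)
```

The whole temporal dimension collapses into one image, after which any spatial feature extractor and a simple KNN or Bayesian classifier can be applied, as the abstract describes.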

  6. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  7. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    Science.gov (United States)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.
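
A minimal DBSCAN of the kind used above for clustering LiDAR returns can be written with NumPy alone; the eps and min_pts values and the synthetic point blobs below are illustrative:

```python
import numpy as np

def dbscan(pts, eps=1.0, min_pts=4):
    """Minimal DBSCAN: returns one label per point, -1 for noise."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in d]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already assigned, or not a core point
        stack, labels[i] = [i], cluster    # grow a new cluster from core point i
        while stack:
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if len(neighbors[k]) >= min_pts:
                        stack.append(k)    # expand only through core points
        cluster += 1
    return labels

# toy usage: two dense LiDAR-like blobs plus one isolated return
rng = np.random.default_rng(3)
a = rng.normal([0, 0, 0], 0.2, (30, 3))
b = rng.normal([5, 5, 0], 0.2, (30, 3))
pts = np.vstack([a, b, [[20.0, 20.0, 0.0]]])
labels = dbscan(pts, eps=1.0, min_pts=4)
```

The pairwise distance matrix makes this O(n^2) in memory, which is fine for a sketch; a production run over full LiDAR tiles would use a spatial index instead.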

  8. Spoken Language Identification Using Hybrid Feature Extraction Methods

    CERN Document Server

    Kumar, Pawan; Mishra, A N; Chandra, Mahesh

    2010-01-01

    This paper introduces and motivates the use of hybrid robust feature extraction technique for spoken language identification (LID) system. The speech recognizers use a parametric form of a signal to get the most important distinguishable features of speech signal for recognition task. In this paper Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP) along with two hybrid features are used for language Identification. Two hybrid features, Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) were obtained from combination of MFCC and PLP. Two different classifiers, Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) were used for classification. The experiment shows better identification rate using hybrid feature extraction techniques compared to conventional feature extraction methods.BFCC has shown better performance than MFCC with both classifiers. RPLP along with GMM has shown be...

  9. SOFT COMPUTING BASED MEDICAL IMAGE RETRIEVAL USING SHAPE AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    M. Mary Helta Daisy

    2014-01-01

    Full Text Available Image retrieval is a challenging and important research applications like digital libraries and medical image databases. Content-based image retrieval is useful in retrieving images from database based on the feature vector generated with the help of the image features. In this study, we present image retrieval based on the genetic algorithm. The shape feature and morphological based texture features are extracted images in the database and query image. Then generating chromosome based on the distance value obtained by the difference feature vector of images in the data base and the query image. In the selected chromosome the genetic operators like cross over and mutation are applied. After that the best chromosome selected and displays the most similar images to the query image. The retrieval performance of the method shows better retrieval result.

  10. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    Science.gov (United States)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.

  11. Imaging features of benign adrenal cysts

    Energy Technology Data Exchange (ETDEWEB)

    Sanal, Hatice Tuba [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey)]. E-mail: tubasanal@yahoo.com; Kocaoglu, Murat [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey); Yildirim, Duzgun [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey); Bulakbasi, Nail [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey); Guvenc, Inanc [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey); Tayfun, Cem [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey); Ucoz, Taner [Department of Radiology, Gulhane Military Medical Academy, Ankara (Turkey)

    2006-12-15

    Benign adrenal gland cysts (BACs) are rare lesions with a variable histological spectrum and may mimic not only each other but also malignant ones. We aimed to review imaging features of BACs which can be helpful in distinguishing each entity and determining the subsequent appropriate management.

  12. Disorders of cortical formation: MR imaging features.

    Science.gov (United States)

    Abdel Razek, A A K; Kandell, A Y; Elsorogy, L G; Elmongy, A; Basett, A A

    2009-01-01

    The purpose of this article was to review the embryologic stages of the cerebral cortex, illustrate the classification of disorders of cortical formation, and finally describe the main MR imaging features of these disorders. Disorders of cortical formation are classified according to the embryologic stage of the cerebral cortex at which the abnormality occurred. MR imaging shows diminished cortical thickness and sulcation in microcephaly, enlarged dysplastic cortex in hemimegalencephaly, and ipsilateral focal cortical thickening with radial hyperintense bands in focal cortical dysplasia. MR imaging detects the smooth brain of classic lissencephaly, the nodular cobblestone cortex of congenital muscular dystrophy, and the ectopic position of gray matter in heterotopias. MR imaging can detect polymicrogyria and related syndromes as well as the types of schizencephaly. We concluded that MR imaging is essential to demonstrate the morphology, distribution, and extent of different disorders of cortical formation as well as the associated anomalies and related syndromes.

  13. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear on small scales, and features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications, such as feature classification and viewpoint selection. Experiments show that our method, as a multi-scale analysis tool, is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  14. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.
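
A classical linear feature extraction of the kind discussed, here the two-class Fisher discriminant rather than DBFE itself (DBFE derives its directions from the trained classifier's decision boundary instead), can be sketched as:

```python
import numpy as np

def fisher_lda_direction(X0, X1, reg=1e-6):
    """Two-class Fisher discriminant direction w proportional to
    Sw^{-1} (m1 - m0); projecting onto w gives a 1-D feature."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + reg * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)

# toy usage: classes differ only along the first axis; the learned 1-D
# feature keeps that discriminant direction and discards the other three
rng = np.random.default_rng(4)
X0 = rng.normal([0, 0, 0, 0], 1.0, (200, 4))
X1 = rng.normal([3, 0, 0, 0], 1.0, (200, 4))
w = fisher_lda_direction(X0, X1)
z0, z1 = X0 @ w, X1 @ w   # 1-D features
```

Any classifier (GML, KNN, SVM, or a deep network, as in the paper) can then be trained on the reduced features z instead of the full-dimensional input.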

  15. Fingerprint Identification - Feature Extraction, Matching and Database Search

    NARCIS (Netherlands)

    Bazen, Asker Michiel

    2002-01-01

    Presents an overview of state-of-the-art fingerprint recognition technology for identification and verification purposes. Three principal challenges in fingerprint recognition are identified: extracting robust features from low-quality fingerprints, matching elastically deformed fingerprints and eff

  16. Feature Based Image Mosaic Using Steerable Filters and Harris Corner Detector

    Directory of Open Access Journals (Sweden)

    Mahesh

    2013-05-01

    Full Text Available Image mosaicking combines several views of a scene into a single wide-angle view. This paper proposes a feature-based image mosaic approach. The mosaic image system includes feature point detection, feature point descriptor extraction, and matching. A RANSAC algorithm is applied to eliminate mismatches and obtain the transformation matrix between the images. The correct mapping model is estimated, and the input image is transformed with it for image stitching. In this paper, feature points are detected using steerable filters and Harris, and compared with the traditional Harris, KLT, and FAST corner detectors.
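
The RANSAC step for estimating the inter-image transform can be sketched for a 2D affine model (the detector and descriptor stages are omitted, and the iteration count and inlier tolerance are assumptions):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (n, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2)
    return M

def ransac_affine(src, dst, iters=200, tol=0.5, seed=0):
    """RANSAC: repeatedly fit on 3 random correspondences, keep the model
    with the most inliers, then refit on all of those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        inl = np.linalg.norm(pred - dst, axis=1) < tol
        if inl.sum() > best_inliers.sum():
            best_inliers = inl
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# toy usage: a pure translation (+10, +5) with 20% gross mismatches
rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (50, 2))
dst = src + [10.0, 5.0]
dst[:10] = rng.uniform(0, 100, (10, 2))     # corrupted correspondences
M, inliers = ransac_affine(src, dst)
```

The corrupted matches are rejected as outliers, so the recovered translation row of M is essentially exact, which is why RANSAC is the standard mismatch filter in mosaicking pipelines.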

  17. Clinical and imaging features of fludarabine neurotoxicity.

    Science.gov (United States)

    Lee, Michael S; McKinney, Alexander M; Brace, Jeffrey R; Santacruz, Karen

    2010-03-01

    Neurotoxicity from intravenous fludarabine is a rare but recognized clinical entity. Its brain imaging features have not been extensively described. Three patients received 38.5 or 40 mg/m² per day of fludarabine in a 5-day intravenous infusion before bone marrow transplantation in treatment of hematopoietic malignancies. Several weeks later, each patient developed progressive neurologic decline, including retrogeniculate blindness, leading to coma and death. Brain MRI showed progressively enlarging but mild T2/FLAIR hyperintensities in the periventricular white matter. The lesions demonstrated restricted diffusion but did not enhance. Because the neurotoxicity of fludarabine appears long after exposure, neurologic decline in this setting is likely to be attributed to opportunistic disease. However, the imaging features are distinctive in their latency and in being mild relative to the profound clinical features. The safe dose of fludarabine in this context remains controversial.

  18. Semiautomated landscape feature extraction and modeling

    Science.gov (United States)

    Wasilewski, Anthony A.; Faust, Nickolas L.; Ribarsky, William

    2001-08-01

    We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree group blobs are then narrowed to lines using a special thinning algorithm, which retains the topology of the blobs and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated by VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.

  19. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
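
    The PCA projection underlying such transformations can be sketched in a few lines of numpy. This is a generic sketch of the standard algorithm, not the paper's phoneme-subspace construction; the names are illustrative:

```python
import numpy as np

def pca_transform(X, n_components):
    """Project feature vectors onto their top principal components.

    X: (n_samples, n_features). Returns (projected data, projection matrix).
    """
    Xc = X - X.mean(axis=0)                      # center the data
    cov = Xc.T @ Xc / (len(X) - 1)               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1][:n_components]
    W = eigvecs[:, order]                        # (n_features, n_components)
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                    # e.g. 8 filter-bank features
Z, W = pca_transform(X, 3)
```

    The columns of W are orthonormal, so projection preserves the dominant variance directions while discarding the rest.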

  20. Document image retrieval based on multi-density features

    Institute of Scientific and Technical Information of China (English)

    HU Zhilan; LIN Xinggang; YAN Hong

    2007-01-01

    The development of document image databases is becoming a challenge for document image retrieval techniques. Traditional layout-reconstruction-based methods rely on high-quality document images as well as on optical character recognition (OCR) precision, and can only deal with a few widely used languages. The complexity of document layouts greatly hinders layout-analysis-based approaches. This paper describes a multi-density feature based algorithm for binary document images which is independent of OCR or layout analysis. The text area is extracted after preprocessing such as skew correction and marginal noise removal. Then the aspect ratio and multi-density features are extracted from the text area to select the best candidates from the document image database. Experimental results show that this approach is simple, with loss rates of less than 3%, and can efficiently analyze images with different resolutions and from different input systems. The system is also robust to noise from handwritten notes, complex layouts, etc.

  1. A harmonic linear dynamical system for prominent ECG feature extraction.

    Science.gov (United States)

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  2. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    Directory of Open Access Journals (Sweden)

    Ngoc Anh Nguyen Thi

    2014-01-01

    Full Text Available Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  3. Multi-modal image registration using structural features.

    Science.gov (United States)

    Kasiri, Keyvan; Clausi, David A; Fieguth, Paul

    2014-01-01

    Multi-modal image registration has been a challenging task in medical imaging because of the complex intensity relationship between the images to be aligned. Registration methods often rely on the statistical intensity relationship between the images, which suffers from problems such as statistical insufficiency. The proposed registration method works by extracting structural features that utilize complex-phase and gradient-based information. By employing structural relationships between different modalities instead of complex similarity measures, the multi-modal registration problem is converted into a mono-modal one. Therefore, conventional mono-modal similarity measures can be utilized to evaluate the registration results. This new registration paradigm has been tested on magnetic resonance (MR) brain images of different modes. The method has been evaluated based on target registration error (TRE) to determine alignment accuracy. Quantitative results demonstrate that the proposed method achieves registration accuracy comparable to conventional mutual information.

  4. Feature Extraction by Wavelet Decomposition of Surface

    Directory of Open Access Journals (Sweden)

    Prashant Singh

    2010-07-01

    Full Text Available The paper presents a new approach to surface acoustic wave (SAW) chemical sensor array design and data processing for recognition of volatile organic compounds (VOCs) based on transient responses. The array is constructed of variable-thickness single-polymer-coated SAW oscillator sensors. The thicknesses of the polymer coatings are selected such that during the sensing period, different sensors are loaded with varied levels of diffusive inflow of vapour species due to different stages of termination of the equilibration process. Using a single polymer for coating the individual sensors with different thicknesses introduces vapour-specific kinetic variability in the transient responses. The transient shapes are analysed by wavelet decomposition based on Daubechies mother wavelets. The set of discrete wavelet transform (DWT) approximation coefficients across the array transients is taken to represent the vapour sample in two alternate ways. In one, the sets generated by all the transients are combined into a single set to give a single representation to the vapour. In the other, the set of approximation coefficients at each data point generated by all transients is taken to represent the vapour. The latter results in as many alternate representations as there are approximation coefficients. The alternate representations of a vapour sample are treated as different instances or realisations for further processing. The wavelet analysis is then followed by principal component analysis (PCA) to create a new feature space. A comparative analysis of the feature spaces created by the two methods leads to the conclusion that they yield complementary information: the one reveals intrinsic data variables, and the other enhances class separability. The present approach is validated by generating synthetic transient response data based on a prototype polyisobutylene (PIB) coated 3-element SAW sensor array exposed to 7 VOC vapours: chloroform, chlorobenzene o

  5. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
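
    A Centroid-Contour Distance signature can be sketched as follows: distances from the region centroid to boundary points, binned by angle. This is a minimal numpy version; the paper's exact sampling and normalization are not specified here:

```python
import numpy as np

def centroid_contour_distance(contour, n_bins=36):
    """CCD signature: mean centroid distance per angular bin.

    contour: (N, 2) array of (x, y) boundary points of a closed contour.
    """
    c = contour.mean(axis=0)                       # centroid
    d = contour - c
    r = np.hypot(d[:, 0], d[:, 1])                 # centroid distances
    theta = np.arctan2(d[:, 1], d[:, 0])           # angles in [-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sig = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            sig[b] = r[mask].mean()
    return sig

# Sanity check: a unit circle yields a flat signature of ones.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
sig = centroid_contour_distance(circle)
```

    Elongated or lobed contours (such as flower petals) produce peaks and valleys in the signature, which is what makes it usable as a shape feature.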

  7. Feature Extraction and Analysis on CT Image of Xinjiang Local Liver Hydatid by Using Gray-scale Histograms

    Institute of Scientific and Technical Information of China (English)

    木拉提·哈米提; 周晶晶; 严传波; 李莉; 陈建军; 胡彦婷; 孔德伟

    2012-01-01

    The feature extraction of images is foundational for image recognition, image data mining, and content-based image retrieval, and it is also a key issue in pattern recognition and classification. Feature extraction based on gray-scale histograms is a typical algorithm for medical image feature extraction. The liver hydatid CT images were first normalized in scale by uniform quantization; noise was removed with a median filter and contrast was enhanced by contrast-limited adaptive histogram equalization; the gray-scale histogram was then used to obtain the features of each image. The main features for image classification were obtained by applying statistical and maximum-classification-distance analysis to the histogram features, and the classification ability of the features was evaluated by discriminant analysis. The results show that the features extracted from gray-scale histograms differ in the statistical analysis, and that features selected by maximum classification distance improve the accuracy of image classification. This study lays a foundation, to a certain extent, for content-based medical image retrieval and computer-aided diagnosis systems.
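
    First-order features of this kind are computed directly from the normalized gray-level histogram. A minimal numpy sketch; the paper's exact feature set is not specified, so the statistics below are illustrative:

```python
import numpy as np

def histogram_features(img, n_bins=256):
    """First-order statistical features from a gray-scale histogram."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()                          # normalized histogram
    levels = np.arange(n_bins)
    mean = (levels * p).sum()
    var = ((levels - mean) ** 2 * p).sum()
    skew = ((levels - mean) ** 3 * p).sum() / (var ** 1.5 + 1e-12)
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return {'mean': mean, 'variance': var, 'skewness': skew,
            'energy': energy, 'entropy': entropy}

# A constant image: zero variance and entropy, maximal energy.
img = np.full((8, 8), 128, dtype=np.uint8)
f = histogram_features(img)
```

    These per-image statistics then become the feature vector that statistical analysis and discriminant analysis operate on.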

  8. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
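
    For reference, the conventional integral-image strategy that this coprocessor avoids evaluates any rectangular sum in four table lookups, from which Haar-like features are built as differences of rectangles. A minimal numpy sketch (illustrative software version, not the paper's hardware pipeline):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) via four lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_horizontal(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: right half minus left half."""
    return (rect_sum(ii, r, c + w // 2, h, w // 2)
            - rect_sum(ii, r, c, h, w // 2))

# A toy frame whose right half is bright gives a strong positive response.
img = np.zeros((8, 8), dtype=np.int64)
img[:, 4:] = 1
ii = integral_image(img)
f = haar_two_rect_horizontal(ii, 0, 0, 8, 8)
```

    The memory cost of storing `ii` for every frame is exactly what the pixel-pipelined coprocessor is designed to eliminate.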

  9. A review of road extraction from remote sensing images

    Directory of Open Access Journals (Sweden)

    Weixing Wang

    2016-06-01

    Full Text Available Playing a significant role in traffic management, city planning, road monitoring, GPS navigation and map updating, the technology of road extraction from a remote sensing (RS) image has been a hot research topic in recent years. In this paper, after analyzing different road features and road models, the road extraction methods are classified into classification-based methods, knowledge-based methods, mathematical morphology, active contour models, and dynamic programming. Firstly, the road features, road models, existing difficulties and interference factors for road extraction are analyzed. Secondly, the principle of road extraction, the advantages and disadvantages of the various methods, and research achievements are briefly highlighted. Then, comparisons of the different road extraction algorithms are performed, covering road features, test samples and shortcomings. Finally, the research results of recent years are summarized. It is clear that using only one kind of road feature makes it hard to achieve an excellent extraction result. Hence, to get good results, road extraction should combine multiple methods according to the real application. In the future, how to realize complete road extraction from an RS image remains an essential but challenging and important research topic.

  10. Ophthalmic imaging features of posterior scleritis

    Directory of Open Access Journals (Sweden)

    Zhi Li

    2014-07-01

    Full Text Available AIM: To analyze, summarize and describe the ophthalmic imaging features of posterior scleritis. METHODS: Clinical data of 16 patients (21 eyes) with posterior scleritis diagnosed in our hospital from October 2008 to June 2013 were retrospectively analyzed. The results of type-B ultrasonic examination, fundus chromophotography, fundus fluorescein angiography (FFA) and CT were recorded for comprehensive evaluation and analysis of the ophthalmic imaging features of posterior scleritis. RESULTS: All patients underwent type-B ultrasonic examination and manifested as diffuse and nodular types. The diffuse type showed diffusely thickened sclera and a dark hypoechoic area that connected with the optic nerve to form a typical “T”-shaped sign. The nodular type showed scleral echogenic nodules and a relatively regular internal structure. FFA showed that relatively weak mottled fluorescences were visible in the early arterial phase and multiple strong needle-like fluorescences were visible in the arteriovenous phase, which then progressively enlarged and fused; fluorescein leaked into the subretinal tissue in the late phase; varying degrees of strong fluorescence with less clear or unclear boundaries were visible in the optic disk. CT results showed a thickened eyeball wall. CONCLUSION: Posterior scleritis is common in young female patients, whose ophthalmic imaging features are varied and most specific on type-B ultrasonic examination. Selection of a rational ophthalmic imaging examination method, combined with clinical manifestations, can accurately diagnose posterior scleritis and avoid missed and delayed diagnosis.

  11. Geometrically invariant color image watermarking scheme using feature points

    Institute of Scientific and Technical Information of China (English)

    WANG XiangYang; MENG Lan; YANG HongYing

    2009-01-01

    Geometric distortion is known as one of the most difficult attacks to resist. Geometric distortion desynchronizes the location of the watermark and hence causes incorrect watermark detection. In this paper, we propose a geometrically invariant digital watermarking method for color images. In order to synchronize the location for watermark insertion and detection, we use a multi-scale Harris-Laplace detector, by which feature points of a color image can be extracted that are invariant to geometric distortions. Then, self-adaptive local image region (LIR) detection based on feature scale theory is considered for watermarking. At each local image region, the watermark is embedded after image normalization. By binding the digital watermark with invariant image regions, resilience against geometric distortion can be readily obtained. Our method belongs to the category of blind watermarking techniques, because we do not need the original image during detection. Experimental results show that the proposed color image watermarking is not only invisible and robust against common signal processing such as sharpening, noise adding, and JPEG compression, but also robust against geometric distortions such as rotation, translation, scaling, row or column removal, shearing, and local random bending.

  12. Novel Moment Features Extraction for Recognizing Handwritten Arabic Letters

    Directory of Open Access Journals (Sweden)

    Gheith Abandah

    2009-01-01

    Full Text Available Problem statement: Offline recognition of handwritten Arabic text awaits accurate recognition solutions. Most of the Arabic letters have secondary components that are important in recognizing these letters. However these components have large writing variations. We targeted enhancing the feature extraction stage in recognizing handwritten Arabic text. Approach: In this study, we proposed a novel feature extraction approach of handwritten Arabic letters. Pre-segmented letters were first partitioned into main body and secondary components. Then moment features were extracted from the whole letter as well as from the main body and the secondary components. Using multi-objective genetic algorithm, efficient feature subsets were selected. Finally, various feature subsets were evaluated according to their classification error using an SVM classifier. Results: The proposed approach improved the classification error in all cases studied. For example, the improvements of 20-feature subsets of normalized central moments and Zernike moments were 15 and 10%, respectively. Conclusion/Recommendations: Extracting and selecting statistical features from handwritten Arabic letters, their main bodies and their secondary components provided feature subsets that give higher recognition accuracies compared to the subsets of the whole letters alone.
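
    Central and normalized central moments of the kind selected here are straightforward to compute from a letter image. A minimal numpy sketch of the generic moment definitions (not the paper's full moment/Zernike pipeline):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary/gray image (rows = y, cols = x)."""
    img = img.astype(float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00                  # centroid x
    ybar = (ys * img).sum() / m00                  # centroid y
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

def normalized_central_moment(img, p, q):
    """Scale-invariant normalized central moment eta_pq (p + q >= 2)."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)

# A centered symmetric square: odd-order central moments vanish.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1
```

    Computing these moments separately for the main body and the secondary components, as the paper proposes, just means applying the same functions to each partitioned sub-image.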

  13. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voronoi...

  14. Oil spill information extraction based on textural features and multispectral image

    Institute of Scientific and Technical Information of China (English)

    王晶; 刘湘南

    2013-01-01

    The existing methods of oil spill information extraction have several problems. For example, extraction relying only on spectral characteristics is difficult to achieve with high accuracy, and sea conditions and false targets have a serious influence on studies that depend on radar data. A model combining textural features and spectral characteristics based on support vector machine (SVM) classification was designed to extract oil spill information, using an HJ-1 optical satellite image of the Penglai 19-3 oil spill accident in 2011 as study data. First, textural features were calculated through the gray-level co-occurrence matrix; then the model was used to classify, and the oil spill information extraction accuracy was analyzed by comparison with classification on spectral characteristics alone. The total classification accuracy of the former method rose to 90.29%, which was 12.41% higher than the latter. Therefore, using this method can reduce noise information and improve the precision of classification. In addition, the marginal area of the oil spill appears more clearly and the central area

  15. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    Science.gov (United States)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

    Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmarking is performed for widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, having superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using the proposed mean and median features, which are extracted at least a factor of 2.5 faster than alternative features with comparable performance.
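
    Haralick-style texture features start from a gray-level co-occurrence matrix (GLCM). A minimal numpy sketch of a symmetric, normalized GLCM for one pixel offset, plus the classic contrast statistic (the paper's modified Haralick features are not reproduced here):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1                  # count both directions (symmetric)
    return P / P.sum()

def haralick_contrast(P):
    """Haralick 'contrast': sum_ij (i - j)^2 * P(i, j)."""
    idx = np.arange(P.shape[0])
    i, j = np.meshgrid(idx, idx, indexing='ij')
    return ((i - j) ** 2 * P).sum()

# Four flat 2x2 blocks of gray levels 0..3: only block borders contribute
# to the horizontal contrast.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
```

    Other Haralick statistics (energy, homogeneity, correlation) are computed from the same matrix P with different weightings.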

  16. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imagery sensors depend on diverse imaging mechanisms and work within diverse ranges of the spectrum. They also perform diverse functions and have diverse environmental requirements. So it is impractical to accomplish the task of detection or recognition with a single imagery sensor under the conditions of different circumstances, different backgrounds and different targets. Fortunately, the multi-sensor image fusion technique emerged as an important route to solve this problem, and image fusion has become one of the main technical routes used to detect and recognize objects from images. Since loss of information is unavoidable during the fusion process, a very important issue in image fusion is how to preserve the useful information to the utmost; that is, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems actually amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper aiming at the recognition of battlefield targets in complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models that can well imitate natural objects. Special fusion operators are employed during the fusion of areas that contain man-made targets so that useful information can be preserved and the features of targets emphasized. The final fused image is reconstructed from the

  17. EEG signal features extraction based on fractal dimension.

    Science.gov (United States)

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
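
    One widely used fractal-dimension estimator for EEG signals is Higuchi's method; the two novel indices the paper defines are not specified here, so the following is a generic numpy sketch of the standard estimator:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal.

    Builds k down-sampled curves per scale, averages their normalized
    lengths L(k), and fits the slope of log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    Lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)  # curve-length normalization
            lengths.append(d * norm / k)
        Lk.append(np.mean(lengths))
    slope = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(Lk), 1)[0]
    return slope

rng = np.random.default_rng(1)
fd_line = higuchi_fd(np.linspace(0, 1, 1000))   # smooth ramp: FD near 1
fd_noise = higuchi_fd(rng.normal(size=1000))    # white noise: FD near 2
```

    Higher values indicate rougher, more complex signals, which is what makes the estimate useful as a sleep/wake discriminating feature.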

  18. Integrating Color and Spatial Feature for Content-Based Image Retrieval

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, we present a novel and efficient scheme for extracting, indexing and retrieving color images. Our motivation was to reduce the space overhead of partition-based approaches, taking advantage of the fact that only a relatively low number of distinct values of a particular visual feature is present in most images. To extract the color feature and build indices into our image database, we take into consideration factors such as human color perception and perceptual range, and the image is partitioned into a set of regions by using a simple classifying scheme. The compact color feature vector and the spatial color histogram, which are extracted from the segmented image region, are used for representing the color and spatial information in the image. We have also developed region-based distance measures to compare the similarity of two images. Extensive tests on a large image collection were conducted to demonstrate the effectiveness of the proposed approach.

  19. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  1. Barrett's esophagus: clinical features, obesity, and imaging.

    LENUS (Irish Health Repository)

    Quigley, Eamonn M M

    2011-09-01

    The following includes commentaries on clinical features and imaging of Barrett's esophagus (BE); the clinical factors that influence the development of BE; the influence of body fat distribution and central obesity; the role of adipocytokines and proinflammatory markers in carcinogenesis; the role of body mass index (BMI) in healing of Barrett's epithelium; the role of surgery in prevention of carcinogenesis in BE; the importance of double-contrast esophagography and cross-sectional images of the esophagus; and the value of positron emission tomography/computed tomography.

  2. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    Institute of Scientific and Technical Information of China (English)

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between images of adjacent frames in the course of 3D ultrasonic image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which extracts a connected skeleton to express figure features. Feature points of the connected skeleton are extracted automatically by repeatedly computing local curvature extreme points. Initial registration is performed according to the barycenter of the skeleton; thereafter, elastic registration based on radial basis functions is performed according to the feature points of the skeleton. Example results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural difference in shape between different parts of an organ, while simultaneously eliminating the slight elastic deformation between frames caused by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.
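A full radial-basis-function registration solves a linear system for the basis weights; as a simplified stand-in, the sketch below (Python, hypothetical control points) interpolates displacements between matched skeleton feature points with normalized Gaussian kernels (Shepard-style weighting), which conveys the idea of smoothly warping an image by control-point correspondences.

```python
import math

def warp_point(p, src_ctrl, dst_ctrl, sigma=0.5):
    """Displace point p by a Gaussian-weighted blend of the displacements
    observed at matched control-point pairs (src -> dst)."""
    weights, dx, dy = [], 0.0, 0.0
    for (sx, sy), (tx, ty) in zip(src_ctrl, dst_ctrl):
        d2 = (p[0] - sx) ** 2 + (p[1] - sy) ** 2
        w = math.exp(-d2 / (2 * sigma ** 2))
        weights.append(w)
        dx += w * (tx - sx)
        dy += w * (ty - sy)
    total = sum(weights) or 1.0  # guard against all-zero weights
    return (p[0] + dx / total, p[1] + dy / total)
```

Near a control point the warp reproduces that point's displacement; far from all control points it decays toward the identity, so the deformation stays local and smooth.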

  3. Pattern representation in feature extraction and classifier design: matrix versus vector.

    Science.gov (United States)

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extended pattern representation to the vector, has proven to be effective in feature extraction. However, the subsequent classifier following the matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where it has been demonstrated that the effectiveness in classification is attributable solely to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., the matrix-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (MatFE + MatCD). To more comprehensively validate MatFE + MatCD, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector respectively, i.e., MatFE + MatCD, MatFE + VecCD, just the matrix-pattern-oriented classifier design (MatCD), the vector-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (VecFE + MatCD), the vector-pattern-oriented feature extraction followed by the vector-pattern-oriented classifier design (VecFE + VecCD) and just the vector-pattern-oriented classifier design (VecCD). The experiments on the combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) has an effective and efficient performance on those patterns with prior structural knowledge such as images; and 2) the matrix gives us an alternative feasible pattern representation in feature extraction and classifier design, and meanwhile provides a necessary validation for "ugly duckling" and "no free lunch" theorems.

  4. Special feature on imaging systems and techniques

    Science.gov (United States)

    Yang, Wuqiang; Giakos, George

    2013-07-01

    The IEEE International Conference on Imaging Systems and Techniques (IST'2012) was held in Manchester, UK, on 16-17 July 2012. The participants came from 26 countries or regions: Austria, Brazil, Canada, China, Denmark, France, Germany, Greece, India, Iran, Iraq, Italy, Japan, Korea, Latvia, Malaysia, Norway, Poland, Portugal, Sweden, Switzerland, Taiwan, Tunisia, UAE, UK and USA. The technical program of the conference consisted of a series of scientific and technical sessions, exploring physical principles, engineering and applications of new imaging systems and techniques, as reflected by the diversity of the submitted papers. Following a rigorous review process, a total of 123 papers were accepted, and they were organized into 30 oral presentation sessions and a poster session. In addition, six invited keynotes were arranged. The conference not only provided the participants with a unique opportunity to exchange ideas and disseminate research outcomes but also paved a way to establish global collaboration. Following the IST'2012, a total of 55 papers, which were technically extended substantially from their versions in the conference proceedings, were submitted as regular papers to this special feature of Measurement Science and Technology. Following a rigorous reviewing process, 25 papers have been finally accepted for publication in this special feature and they are organized into three categories: (1) industrial tomography, (2) imaging systems and techniques and (3) image processing. These papers not only present the latest developments in the field of imaging systems and techniques but also offer potential solutions to existing problems. We hope that this special feature provides a good reference for researchers who are active in the field and will serve as a catalyst to trigger further research. It has been our great pleasure to be the guest editors of this special feature. We would like to thank the authors for their contributions, without which this special feature would not have been possible.

  5. Change detection in high resolution SAR images based on multiscale texture features

    Science.gov (United States)

    Wen, Caihuan; Gao, Ziqiang

    2011-12-01

    This paper studies a change detection algorithm for high resolution (HR) Synthetic Aperture Radar (SAR) images based on multi-scale texture features. Firstly, preprocessed multi-temporal TerraSAR images were decomposed by the 2-D dual tree complex wavelet transform (DT-CWT), and multi-scale texture features were extracted from those images. Then, the log-ratio operation was utilized to obtain difference images, and the Bayes minimum error theory was used to extract change information from the difference images. Lastly, a precision assessment was performed, and the results were compared with those of a method based on texture features extracted from the gray-level co-occurrence matrix (GLCM). We conclude that the change detection algorithm based on multi-scale texture features offers a substantial improvement, which proves it an effective method for change detection in high spatial resolution SAR images.
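The log-ratio step used to build the difference image is standard for SAR change detection and easy to sketch. A minimal version (Python, toy intensity arrays; the Bayes thresholding in the paper is replaced here by a fixed threshold for illustration):

```python
import math

def log_ratio_map(img1, img2, eps=1e-6):
    """Absolute log-ratio difference image for two co-registered SAR frames.
    eps avoids log(0) on zero-intensity pixels."""
    return [[abs(math.log((a + eps) / (b + eps)))
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(img1, img2)]

def change_mask(diff, threshold):
    """Binary change map: 1 where the log-ratio exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row] for row in diff]
```

The log-ratio is preferred over a plain difference for SAR because it turns multiplicative speckle noise into an additive term.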

  6. Multi-scale contrast enhancement of oriented features in 2D images using directional morphology

    Science.gov (United States)

    Das, Debashis; Mukhopadhyay, Susanta; Praveen, S. R. Sai

    2017-01-01

    This paper presents a multi-scale contrast enhancement scheme for improving the visual quality of directional features present in 2D gray scale images. Directional morphological filters are employed to locate and extract the scale-specific image features with different orientations which are subsequently stored in a set of feature images. The final enhanced image is constructed by weighted combination of these feature images with the original image. While construction, the feature images corresponding to progressively smaller scales are made to have higher proportion of contribution through the use of progressively larger weights. The proposed method has been formulated, implemented and executed on a set of real 2D gray scale images with oriented features. The experimental results visually establish the efficacy of the method. The proposed method has been compared with other similar methods both on subjective and objective basis and the overall performance is found to be satisfactory.

  7. Surface Electromyography Feature Extraction Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Farzaneh Akhavan Mahdavi

    2012-12-01

    Full Text Available Considering the vast variety of EMG signal applications such as rehabilitation of people suffering from some mobility limitations, scientists have done much research on EMG control systems. In this regard, feature extraction of the EMG signal has been highly valued as a significant technique to extract the desired information of the EMG signal and remove unnecessary parts. In this study, the Wavelet Transform (WT) has been applied as the main technique to extract Surface EMG (SEMG) features because the WT is consistent with the nature of EMG as a nonstationary signal. Furthermore, two evaluation criteria, namely, the RES index (the ratio of a Euclidean distance to a standard deviation) and the scatter plot are used to investigate the efficiency of wavelet feature extraction. The results illustrated an improvement in class separability of hand movements in feature space. Accordingly, it has been shown that only the SEMG features extracted from the first and second levels of WT decomposition by the second order of the Daubechies family (db2) yielded the best class separability.
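One level of a discrete wavelet decomposition, plus a typical per-sub-band EMG feature, can be sketched briefly. The paper uses db2; the sketch below uses the Haar wavelet instead purely to keep the filter short, and assumes an even-length signal.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: orthonormal pairwise averages (approximation)
    and differences (detail). Assumes len(signal) is even."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def mav(coeffs):
    """Mean absolute value -- a common EMG feature computed per sub-band."""
    return sum(abs(c) for c in coeffs) / len(coeffs)
```

Because the transform is orthonormal, the energy of the sub-bands equals the energy of the input, so features such as MAV or sub-band energy partition the signal's information by scale.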

  8. An Algorithm of Image Contrast Enhancement Based on Pixels Neighborhood’s Local Feature

    Directory of Open Access Journals (Sweden)

    Chen Yan

    2013-12-01

    Full Text Available In this study, we propose an image contrast enhancement algorithm based on local features to acquire the edge information of an image, remove ray imaging noise, and overcome edge blurring and other defects. The method extracts edge features and performs contrast enhancement to varying degrees for pixel neighborhoods with different characteristics, using the neighborhood local variance and a complexity function to achieve local feature enhancement. The simulation shows that the method not only enhances the contrast of the entire image but also effectively preserves image edge information and improves image quality.
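The neighborhood local variance that drives the per-pixel enhancement decision can be computed directly. A minimal sketch (Python, images as nested lists; the paper's complexity function and gain schedule are not specified, so only the variance map is shown):

```python
def local_variance(img, r=1):
    """Per-pixel variance over a (2r+1) x (2r+1) neighborhood, clipped at
    the image border. Flat regions give 0; edges and texture give large values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out
```

An enhancement scheme of this family would then apply a stronger contrast gain where the variance marks an edge and a weaker one in flat (possibly noisy) regions.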

  9. Imaging features of juxtacortical chondroma in children

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Stephen F. [St. Jude Children' s Research Hospital, Department of Radiological Sciences, Memphis, TN (United States)

    2014-01-15

    Juxtacortical chondroma is a rare benign bone lesion in children. Children usually present with a mildly painful mass, which prompts diagnostic imaging studies. The rarity of this condition often presents a diagnostic challenge. Correct diagnosis is crucial in guiding surgical management. To describe the characteristic imaging findings of juxtacortical chondroma in children. We identified all children who were diagnosed with juxtacortical chondroma between 1998 and 2012. A single experienced pediatric radiologist reviewed all diagnostic imaging studies, including plain radiographs, CT, MR and bone scans. Seven children (5 boys and 2 girls) with juxtacortical chondroma were identified, ranging in age from 6 years to 16 years (mean 12.3 years). Mild pain and a palpable mass were present in all seven children. Plain radiographs were available in 6/7, MR in 7/7, CT in 4/7 and skeletal scintigraphy in 5/7 children. Three lesions were located in the proximal humerus, with one each in the distal radius, distal femur, proximal tibia and scapula. Radiographic and CT features deemed highly suggestive of juxtacortical chondroma included cortical scalloping, underlying cortical sclerosis and overhanging margins. MRI features consistent with juxtacortical chondroma included isointensity to skeletal muscle on T1, marked hyperintensity on T2 and peripheral rim enhancement after contrast agent administration. One of seven lesions demonstrated intramedullary extension, and 2/7 showed adjacent soft-tissue edema. Juxtacortical chondroma is an uncommon benign lesion in children with characteristic features on plain radiographs, CT and MR. Recognition of these features is invaluable in guiding appropriate surgical management. (orig.)

  10. Multiwavelets domain singular value features for image texture classification

    Institute of Scientific and Technical Information of China (English)

    RAMAKRISHNAN S.; SELVAN S.

    2007-01-01

    A new approach based on multiwavelets transformation and singular value decomposition (SVD) is proposed for the classification of image textures. Lower singular values are truncated based on their energy distribution to classify the textures in the presence of additive white Gaussian noise (AWGN). The proposed approach extracts features such as energy, entropy, local homogeneity and max-min ratio from the selected singular values of the multiwavelets transformation coefficients of image textures. The classification was carried out using a probabilistic neural network (PNN). Performance of the proposed approach was compared with conventional wavelet domain gray level co-occurrence matrix (GLCM) based features, a discrete multiwavelets transformation energy based approach, and an HMM based approach. Experimental results showed the superiority of the proposed algorithms when compared with existing algorithms.
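Two of the ingredients above, energy-based truncation of the lower singular values and feature computation from the retained ones, can be sketched as follows (Python; the multiwavelet transform and SVD themselves are assumed to have been computed already):

```python
import math

def truncate(singular_values, keep=0.95):
    """Keep the largest singular values until `keep` of the total energy is
    covered; the discarded tail is the noise-dominated part."""
    svals = sorted(singular_values, reverse=True)
    total = sum(s * s for s in svals)
    kept, acc = [], 0.0
    for s in svals:
        kept.append(s)
        acc += s * s
        if acc >= keep * total:
            break
    return kept

def sv_features(singular_values):
    """Energy and (Shannon) entropy of the squared-singular-value spectrum."""
    energy = sum(s * s for s in singular_values)
    probs = [s * s / energy for s in singular_values]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return energy, entropy
```

A spectrum concentrated in one singular value gives zero entropy; a spread-out spectrum gives high entropy, which is what makes these features discriminative for texture.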

  11. Single Image Superresolution via Directional Group Sparsity and Directional Features.

    Science.gov (United States)

    Li, Xiaoyan; He, Hongjie; Wang, Ruxin; Tao, Dacheng

    2015-09-01

    Single image superresolution (SR) aims to construct a high-resolution version from a single low-resolution (LR) image. The SR reconstruction is challenging because of the missing details in the given LR image. Thus, it is critical to explore and exploit effective prior knowledge for boosting the reconstruction performance. In this paper, we propose a novel SR method by exploiting both the directional group sparsity of the image gradients and the directional features in similarity weight estimation. The proposed SR approach is based on two observations: 1) most of the sharp edges are oriented in a limited number of directions and 2) an image pixel can be estimated by the weighted averaging of its neighbors. In consideration of these observations, we apply the curvelet transform to extract directional features which are then used for region selection and weight estimation. A combined total variation regularizer is presented which assumes that the gradients in natural images have a straightforward group sparsity structure. In addition, a directional nonlocal means regularization term takes pixel values and directional information into account to suppress unwanted artifacts. By assembling the designed regularization terms, we solve the SR problem of an energy function with minimal reconstruction error by applying a framework of templates for first-order conic solvers. The thorough quantitative and qualitative results in terms of peak signal-to-noise ratio, structural similarity, information fidelity criterion, and preference matrix demonstrate that the proposed approach achieves higher quality SR reconstruction than the state-of-the-art algorithms.
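The combined regularizer above builds on the scalar total variation of the image gradients (extended in the paper with directional group sparsity). The plain, non-directional TV term looks like this (Python, images as nested lists; anisotropic L1 variant for brevity):

```python
def total_variation(img):
    """Anisotropic total variation: sum of absolute horizontal and vertical
    forward differences. Small for piecewise-flat images, large for noise."""
    tv = 0.0
    for y in range(len(img)):
        for x in range(len(img[0])):
            if x + 1 < len(img[0]):
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < len(img):
                tv += abs(img[y + 1][x] - img[y][x])
    return tv
```

Minimizing reconstruction error plus a weighted TV term favors sharp edges over ringing artifacts, which is why TV-family priors are common in superresolution.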

  12. Mass-like extramedullary hematopoiesis: imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Ginzel, Andrew W. [Synergy Radiology Associates, Houston, TX (United States); Kransdorf, Mark J.; Peterson, Jeffrey J.; Garner, Hillary W. [Mayo Clinic, Department of Radiology, Jacksonville, FL (United States); Murphey, Mark D. [American Institute for Radiologic Pathology, Silver Spring, MD (United States)

    2012-08-15

    To report the imaging appearances of mass-like extramedullary hematopoiesis (EMH), to identify those features that are sufficiently characteristic to allow a confident diagnosis, and to recognize the clinical conditions associated with EMH and the relative incidence of mass-like disease. We retrospectively identified 44 patients with EMH; 12 of which (27%) had focal mass-like lesions and formed the study group. The study group consisted of 6 male and 6 female subjects with a mean age of 58 years (range 13-80 years). All 12 patients underwent CT imaging and 3 of the 12 patients had undergone additional MR imaging. The imaging characteristics of the extramedullary hematopoiesis lesions in the study group were analyzed and recorded. The patient's clinical presentation, including any condition associated with extramedullary hematopoiesis, was also recorded. Ten of the 12 (83%) patients had one or more masses located along the axial skeleton. Of the 10 patients with axial masses, 9 (90%) had multiple masses and 7 (70%) demonstrated internal fat. Eight patients (80%) had paraspinal masses and 4 patients (40%) had presacral masses. Seven patients (70%) had splenomegaly. Eleven of the 12 patients had a clinical history available for review. A predisposing condition for extramedullary hematopoiesis was present in 10 patients and included various anemias (5 cases; 45%), myelofibrosis/myelodysplastic syndrome (4 cases; 36%), and marrow proliferative disorder (1 case; 9%). One patient had no known predisposing condition. Mass-like extramedullary hematopoiesis most commonly presents as multiple, fat-containing lesions localized to the axial skeleton. When these imaging features are identified, extramedullary hematopoiesis should be strongly considered, particularly when occurring in the setting of a predisposing medical condition. (orig.)

  13. Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

    OpenAIRE

    Turroni, Francesco

    2012-01-01

    The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...

  14. Imaging internal features of whole, unfixed bacteria.

    Science.gov (United States)

    Thomson, Nicholas M; Channon, Kevin; Mokhtar, Noor Azlin; Staniewicz, Lech; Rai, Ranjana; Roy, Ipsita; Sato, Shun; Tsuge, Takeharu; Donald, Athene M; Summers, David; Sivaniah, Easan

    2011-01-01

    Wet scanning-transmission electron microscopy (STEM) is a technique that allows high-resolution transmission imaging of biological samples in a hydrated state, with minimal sample preparation. However, it has barely been used for the study of bacterial cells. In this study, we present an analysis of the advantages and disadvantages of wet STEM compared with standard transmission electron microscopy (TEM). To investigate the potential applications of wet STEM, we studied the growth of polyhydroxyalkanoate and triacylglycerol carbon storage inclusions. These were easily visible inside cells, even in the early stages of accumulation. Although TEM produces higher resolution images, wet STEM is useful when preservation of the sample is important or when studying the relative sizes of different features, since samples do not need to be sectioned. Furthermore, under carefully selected conditions, it may be possible to maintain cell viability, enabling new types of experiments to be carried out. To our knowledge, internal features of bacterial cells have not been imaged previously by this technique.

  15. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of digital surface model, generation of bare earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the second stage, we generated the bare earth DEM from LiDAR point cloud data. In most of the cases, bare earth DEM does not represent true ground elevation. Hence, the model was edited to get the most accurate DEM/ DTM possible and normalized the LiDAR point cloud data based on DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. 
A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
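The nDSM/CHM computation at the end of this workflow is a per-cell raster subtraction. A minimal sketch (Python, toy elevation grids standing in for the LiDAR-derived rasters):

```python
def canopy_height_model(dsm, dem):
    """nDSM/CHM: per-cell difference between the surface elevation (DSM,
    tops of trees and buildings) and the bare-earth elevation (DEM)."""
    return [[s - g for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]
```

Zero cells correspond to bare ground; positive cells give above-ground feature heights, from which tree crowns and buildings are then delineated.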

  16. Hardwood species classification with DWT based hybrid texture feature extraction techniques

    Indian Academy of Sciences (India)

    Arvind R Yadav; R S Anand; M L Dewal; Sangeeta Gupta

    2015-12-01

    In this work, discrete wavelet transform (DWT) based hybrid texture feature extraction techniques have been used to categorize the microscopic images of hardwood species into 75 different classes. Initially, the DWT has been employed to decompose the image up to 7 levels using Daubechies (db3) wavelet as decomposition filter. Further, first-order statistics (FOS) and four variants of local binary pattern (LBP) descriptors are used to acquire distinct features of these images at various levels. The linear support vector machine (SVM), radial basis function (RBF) kernel SVM and random forest classifiers have been employed for classification. The classification accuracy obtained with state-of-the-art and DWT based hybrid texture features using various classifiers are compared. The DWT based FOS-uniform local binary pattern (DWTFOSLBPu2) texture features at the 4th level of image decomposition have produced best classification accuracy of 97.67 ± 0.79% and 98.40 ± 0.64% for grayscale and RGB images, respectively, using linear SVM classifier. Reduction in feature dataset by minimal redundancy maximal relevance (mRMR) feature selection method is achieved and the best classification accuracy of 99.00 ± 0.79% and 99.20 ± 0.42% have been obtained for DWT based FOS-LBP histogram Fourier features (DWTFOSLBP-HF) technique at the 5th and 6th levels of image decomposition for grayscale and RGB images, respectively, using linear SVM classifier. The DWTFOSLBP-HF features selected with the mRMR method have also established superiority amongst the DWT based hybrid texture feature extraction techniques for randomly divided database into different proportions of training and test datasets.
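The basic LBP descriptor used above assigns each pixel an 8-bit code by thresholding its 8 neighbors against the center value. A minimal sketch (Python, 3x3 neighborhood; the uniform-pattern and histogram-Fourier variants in the paper build on this code):

```python
def lbp_code(img, y, x):
    """8-bit local binary pattern at interior pixel (y, x): bit i is set when
    the i-th neighbor (clockwise from top-left) is >= the center value."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
            img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
            img[y + 1][x - 1], img[y][x - 1]]
    return sum(1 << i for i, v in enumerate(nbrs) if v >= c)
```

A histogram of these codes over a region (or over a DWT sub-band, as in the hybrid features above) is the actual texture descriptor fed to the classifier.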

  17. AN EFFICIENT APPROACH FOR EXTRACTION OF LINEAR FEATURES FROM HIGH RESOLUTION INDIAN SATELLITE IMAGERIES

    Directory of Open Access Journals (Sweden)

    DK Bhattacharyya

    2010-07-01

    Full Text Available This paper presents an object-oriented feature extraction approach in order to classify linear features like drainage, roads etc. from high resolution Indian satellite imageries. It starts with the multiresolution segmentation of image objects for optimal separation and representation of image regions or objects. Fuzzy membership functions were defined for a selected set of image object parameters such as mean, ratio, shape index, area etc. for representation of required image objects. The experiment was carried out for both panchromatic (CARTOSAT-I) and multispectral (IRS-P6 LISS IV) Indian satellite imageries. Experimental results show that the extraction of linear features can be achieved at a satisfactory level through proper segmentation and appropriate definition and representation of key parameters of image objects.
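The fuzzy membership functions over object parameters (mean, ratio, shape index, area) are the core of this kind of object-oriented classification. The paper does not specify the membership shapes; a trapezoid is a common choice, sketched here (Python, hypothetical parameter bounds):

```python
def fuzzy_membership(x, a, b, c, d):
    """Trapezoidal membership over a < b <= c < d: 0 outside [a, d],
    1 on the plateau [b, c], linear ramps on [a, b] and [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

An image object's class score is then typically the minimum (fuzzy AND) of its memberships across all selected parameters, e.g. a high "road" score requires both an elongated shape index and a suitable spectral mean.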

  18. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    The majority of musical feature extraction applications are based on the Fourier transform in various disguises. This is despite the fact that this transform is subject to a series of restrictions, which admittedly ease the computation and interpretation of transform coefficients, but also impose arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transforms such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  19. Hyperspectral Image Classification Based on the Weighted Probabilistic Fusion of Multiple Spectral-spatial Features

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-08-01

    Full Text Available A hyperspectral image classification method based on the weighted probabilistic fusion of multiple spectral-spatial features is proposed in this paper. First, the minimum noise fraction (MNF) approach was employed to reduce the dimension of the hyperspectral image and extract the spectral feature from the image; this spectral feature was then combined with the texture feature extracted based on the gray level co-occurrence matrix (GLCM), the multi-scale morphological feature extracted based on the OFC operator and the end member feature extracted based on the sequential maximum angle convex cone (SMACC) method to form three spectral-spatial features. Afterwards, a support vector machine (SVM) classifier was used for the classification of each spectral-spatial feature separately. Finally, we established the weighted probabilistic fusion model and applied it to fuse the SVM outputs for the final classification result. In order to verify the proposed method, ROSIS and AVIRIS images were used in our experiment and the overall accuracy reached 97.65% and 96.62% respectively. The results indicate that the proposed method can not only overcome the limitations of traditional single-feature based hyperspectral image classification, but is also superior to the conventional VS-SVM method and probabilistic fusion method. The classification accuracy of hyperspectral images was improved effectively.
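The final fusion step, combining per-classifier class probabilities with weights and taking the argmax, can be sketched compactly (Python, hypothetical class names and weights; the paper's weight estimation is not reproduced):

```python
def weighted_probabilistic_fusion(prob_maps, weights):
    """Fuse per-classifier class-probability dicts with the given weights
    and return the class with the highest fused score (for one pixel)."""
    fused = {}
    for probs, w in zip(prob_maps, weights):
        for cls, p in probs.items():
            fused[cls] = fused.get(cls, 0.0) + w * p
    return max(fused, key=fused.get)
```

Giving a larger weight to the classifier trained on the more reliable feature lets a confident minority classifier be overruled only when the others agree strongly.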

  20. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
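The paper derives its color-invariant values from virtual narrow-band camera responses; the simplest illustration of the underlying idea, invariance to a uniform scaling of the illumination, is plain chromaticity normalization (Python sketch, not the paper's actual invariant):

```python
def chromaticity(rgb):
    """Normalized chromaticity coordinates: each channel divided by the
    channel sum, so a uniform intensity scaling cancels out."""
    total = sum(rgb)
    if total == 0:
        return (0.0, 0.0, 0.0)
    return tuple(v / total for v in rgb)
```

Matching feature points on such illumination-normalized values (rather than raw gray levels) is what makes correspondences stable when the lighting changes between the frames being mosaicked.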

  1. Content-Based Digital Image Retrieval based on Multi-Feature Amalgamation

    Directory of Open Access Journals (Sweden)

    Linhao Li

    2013-12-01

    Full Text Available In practice, digital image retrieval faces many kinds of problems, and difficulties remain in the measures and methods available for application. Currently there is no unambiguous algorithm that can directly capture the salient features of image content while simultaneously satisfying color, scale and rotation invariance of the features. We therefore analyze the relevant techniques of content-based image retrieval, focusing on global features such as the seven Hu invariant moments, the edge direction histogram and eccentricity; a method for blocked images is also discussed. During image matching, the extracted image features are treated as points in a vector space, the similarity of two images is measured by the closeness between two points, and the similarity is calculated by the Euclidean distance and the histogram intersection distance. A novel method based on multi-feature amalgamation is then proposed to solve the problems in retrieval methods based on global and local features. It extracts the eccentricity, the seven Hu invariant moments and the edge direction histogram, calculates the similarity distance of each feature of the images, and normalizes them. Within the global features, a weighted feature distance is adopted to form the similarity measurement function for retrieval. The features of blocked images are extracted with a partitioning method based on polar coordinates. Finally, following the idea of hierarchical retrieval between global and local features, results are first obtained from global features such as the invariant moments; these results are then taken as the input of the local-feature matching in the second-layer retrieval, which effectively improves the accuracy of retrieval.
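The two similarity measures named above, histogram intersection and a weighted combination of per-feature distances, are both one-liners. A minimal sketch (Python, hypothetical normalized inputs):

```python
def histogram_intersection(h1, h2):
    """Similarity of two histograms: sum of bin-wise minima.
    Equals 1.0 for identical normalized histograms, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def combined_distance(dists, weights):
    """Weighted sum of per-feature distances (each assumed normalized
    to [0, 1] beforehand), as in multi-feature amalgamation."""
    return sum(w * d for w, d in zip(weights, dists))
```

In a hierarchical scheme, a cheap global distance like this first prunes the database, and only the surviving candidates go through the more expensive block-wise local matching.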

  2. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.
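The core move in SAFE, ranking candidate features by how predictive they are of silver-standard surrogate labels instead of scarce gold labels, can be illustrated with a toy stand-in (Python; real SAFE uses sparse regression, whereas this sketch ranks by absolute Pearson correlation, and all names and data are hypothetical):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def surrogate_select(features, surrogate, k):
    """Keep the k features (name -> per-patient counts) most correlated, in
    absolute value, with the silver-standard surrogate labels."""
    ranked = sorted(features,
                    key=lambda f: -abs(pearson(features[f], surrogate)))
    return ranked[:k]
```

Because no gold-standard labels are consumed by the selection step, the few available gold labels can all be spent on training the final classifier.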

  3. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...

  4. Discriminative tonal feature extraction method in mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden Markov model (HMM) based tone modeling. The method uses linear transforms to project F0 (fundamental frequency) features of neighboring syllables as compensations, and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained by using an objective function termed "minimum tone error", which is a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve a 3.82% tone recognition rate improvement, compared with the baseline, using a maximum likelihood trained HMM on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.

  5. GFF-Ex: a genome feature extraction package

    OpenAIRE

    Rastogi, Achal; Gupta, Dinesh

    2014-01-01

    Background Genomic features of whole genome sequences emerging from various sequencing and annotation projects are represented and stored in several formats. Amongst these formats, the GFF (Generic/General Feature Format) has emerged as a widely accepted, portable and successfully used flat file format for genome annotation storage. With an increasing interest in genome annotation projects and secondary and meta-analysis, there is a need for efficient tools to extract sequences of interests f...

  6. Data Feature Extraction for High-Rate 3-Phase Data

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-18

This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
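
    The two extracted features — the start time of each signal and a sliding-window envelope — might be computed along these lines (an illustrative sketch; the threshold and window size are hypothetical, not Exxeno's actual parameters):

```python
def start_time(signal, threshold):
    """Index of the first sample whose magnitude exceeds the threshold."""
    for i, s in enumerate(signal):
        if abs(s) > threshold:
            return i
    return None

def envelope(signal, window=3):
    """Peak magnitude over a sliding window as a crude envelope estimate."""
    half = window // 2
    return [max(abs(s) for s in signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]
```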

  7. Imaging features of foot osteoid osteoma

    Energy Technology Data Exchange (ETDEWEB)

    Shukla, Satyen; Clarke, Andrew W.; Saifuddin, Asif [Royal National Orthopaedic Hospital NHS Trust, Department of Radiology, Stanmore, Middlesex (United Kingdom)

    2010-07-15

We performed a retrospective review of the imaging of nine patients with a diagnosis of foot osteoid osteoma (OO). Radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) had been performed in all patients. Radiographic features evaluated were the identification of a nidus and cortical thickening. CT features noted were nidus location (affected bone - intramedullary, intracortical, subarticular) and nidus calcification. MRI features noted were the presence of an identifiable nidus, presence and grade of bone oedema and whether a joint effusion was identified. Of the nine patients, three were female and six male, with a mean age of 21 years (range 11-39 years). Classical symptoms of OO (night pain, relief with aspirin) were identified in five of eight (62.5%) cases (in one case, the medical records could not be retrieved). In five patients the lesion was located in the hindfoot (four calcaneus, one talus), while four were in the mid- or forefoot (two metatarsal and two phalangeal). Radiographs were normal in all patients with hindfoot OO. CT identified the nidus in all cases except one terminal phalanx lesion (8/9, 89%), while MRI demonstrated a nidus in six of nine cases (67%). The nidus was of predominantly intermediate signal intensity on T1-weighted (T1W) sequences, with intermediate to high signal intensity on T2-weighted (T2W) sequences. High-grade bone marrow oedema limited to the affected bone, together with adjacent soft-tissue oedema, was identified in all cases. In a young patient with chronic hindfoot pain and a normal radiograph, MRI features suggestive of possible OO include extensive bone marrow oedema limited to one bone, with a possible nidus demonstrated in two-thirds of cases. The presence or absence of a nidus should be confirmed with high-resolution CT. (orig.)

  8. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

Full Text Available The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called "Histogram Backprojection". The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
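
    Normalized chromaticity histograms and histogram backprojection can be illustrated roughly as follows (a toy sketch with coarse bins and flat pixel lists standing in for images; not the study's code):

```python
def chromaticity_hist(pixels, bins=4):
    """Normalized r-g chromaticity histogram of a list of (R, G, B) pixels."""
    hist = {}
    for r, g, b in pixels:
        s = (r + g + b) or 1
        key = (min(int(bins * r / s), bins - 1), min(int(bins * g / s), bins - 1))
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def backproject(pixels, model, bins=4):
    """Score each pixel by the model-histogram weight of its chromaticity bin."""
    scores = []
    for r, g, b in pixels:
        s = (r + g + b) or 1
        key = (min(int(bins * r / s), bins - 1), min(int(bins * g / s), bins - 1))
        scores.append(model.get(key, 0.0))
    return scores
```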

  9. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    Science.gov (United States)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition (99.41% for the eyes part, 98.16% for the nose part and 97.25% for the whole face).

  10. Feature selection with the image grand tour

    Science.gov (United States)

    Marchette, David J.; Solka, Jeffrey L.

    2000-08-01

The grand tour is a method for visualizing high dimensional data by presenting the user with a set of projections and the projected data. This idea was extended to multispectral images by viewing each pixel as a multidimensional value, and viewing the projections of the grand tour as an image. The user then looks for projections which provide a useful interpretation of the image, for example, separating targets from clutter. We discuss a modification of this which allows the user to select convolution kernels which provide useful discriminant ability, either in an unsupervised manner, as in the image grand tour, or in a supervised manner using training data. This approach is extended to other window-based features. For example, one can define a generalization of the median filter as a linear combination of the order statistics within a window. Thus the median filter is that projection containing zeros everywhere except for the middle value, which contains a one. Using the convolution grand tour one can select projections on these order statistics to obtain new nonlinear filters.
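
    The generalized median filter described above — a linear combination of the order statistics within a window — is easy to sketch in 1-D (illustrative only; the paper applies such projections to image windows):

```python
def order_statistic_filter(signal, weights):
    """Sort each window and take a weighted sum of its order statistics.
    A weight vector with a single 1 in the middle reproduces the median
    filter; a 1 in the last position gives a max filter."""
    half = len(weights) // 2
    out = []
    for i in range(half, len(signal) - half):
        window = sorted(signal[i - half:i + half + 1])
        out.append(sum(w * v for w, v in zip(weights, window)))
    return out
```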

  11. Incorporating global information in feature-based multimodal image registration

    Science.gov (United States)

    Li, Yong; Stevenson, Robert

    2014-03-01

A multimodal image registration framework based on searching the best matched keypoints and the incorporation of global information is proposed. It comprises two key elements: keypoint detection and an iterative process. Keypoints are detected from both the reference and test images. For each test keypoint, a number of reference keypoints are chosen as mapping candidates. A triplet of keypoint mappings determines an affine transformation that is evaluated using a similarity metric between the reference image and the test image transformed by the determined transformation. An iterative process is conducted on triplets of keypoint mappings, keeping track of the best matched reference keypoint. Random sample consensus and mutual information are applied to eliminate outlier keypoint mappings. The similarity metric is defined to be the number of overlapped edge pixels over the entire images, allowing for global information to be incorporated in the evaluation of triplets of mappings. The performance of the framework is investigated with keypoints extracted by the scale invariant feature transform and the partial intensity invariant feature descriptor. Experimental results show that the proposed framework can provide more accurate registration than existing methods.
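
    The affine transformation determined by a triplet of keypoint mappings can be recovered by solving two 3x3 linear systems, e.g. with Cramer's rule (a minimal sketch, not the authors' implementation):

```python
def affine_from_triplet(src, dst):
    """Affine transform (a, b, tx, c, d, ty) mapping three src points onto dst,
    i.e. x' = a*x + b*y + tx and y' = c*x + d*y + ty."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if det == 0:
        raise ValueError("collinear keypoints determine no unique affine map")

    def solve(v1, v2, v3):
        # Cramer's rule on [[x1,y1,1],[x2,y2,1],[x3,y3,1]] @ [p,q,r] = [v1,v2,v3]
        dp = v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)
        dq = x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)
        dr = x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2) + v1 * (x2 * y3 - x3 * y2)
        return dp / det, dq / det, dr / det

    a, b, tx = solve(*(p[0] for p in dst))
    c, d, ty = solve(*(p[1] for p in dst))
    return a, b, tx, c, d, ty
```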

  12. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  14. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm is te...

  15. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
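
    The correlation-analysis-based selection over learned weight vectors might look like the following greedy sketch (the 0.95 redundancy threshold is hypothetical):

```python
from math import sqrt

def corr(u, v):
    """Pearson correlation between two weight vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sqrt(sum((a - mu) ** 2 for a in u))
    dv = sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv) if du and dv else 0.0

def select_filters(weights, max_corr=0.95):
    """Greedily keep weight vectors whose correlation with every
    already-kept vector stays below the redundancy threshold."""
    kept = []
    for w in weights:
        if all(abs(corr(w, k)) < max_corr for k in kept):
            kept.append(w)
    return kept
```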

  16. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic, and consists of the following modules: face detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log Gabor filters. Then, the most discriminative features are selected based on mutual information criteria. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
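
    Selecting the most discriminative features by a mutual information criterion can be sketched as follows (discretized feature values assumed; the feature names are hypothetical):

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for discrete feature values xs and class labels ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def rank_features(features, labels):
    """Feature names sorted by decreasing mutual information with the labels."""
    return sorted(features,
                  key=lambda f: mutual_information(features[f], labels),
                  reverse=True)
```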

  17. METHOD TO EXTRACT BLEND SURFACE FEATURE IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    Lü Zhen; Ke Yinglin; Sun Qing; Kelvin W; Huang Xiaoping

    2003-01-01

    A new method of extraction of blend surface feature is presented. It contains two steps: segmentation and recovery of parametric representation of the blend. The segmentation separates the points in the blend region from the rest of the input point cloud with the processes of sampling point data, estimation of local surface curvature properties and comparison of maximum curvature values. The recovery of parametric representation generates a set of profile curves by marching throughout the blend and fitting cylinders. Compared with the existing approaches of blend surface feature extraction, the proposed method reduces the requirement of user interaction and is capable of extracting blend surface with either constant radius or variable radius. Application examples are presented to verify the proposed method.

  18. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

Full Text Available Audio classification is a fundamental step in managing the rapidly growing volume of audio data. Due to the increasing size of multimedia sources, speech and music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multiresolution analysis is a significant statistical way to extract features from an input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVM), which are based on the principle of structural risk minimization, are applied to classify audio into the classes speech and music by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
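
    Wavelet features of the kind described — per-level detail-subband energies from a DWT — can be sketched with the Haar wavelet (an illustrative sketch, not the paper's exact feature set):

```python
def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def subband_energies(signal, levels):
    """Energy of the detail subband at each decomposition level."""
    energies = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)
        energies.append(sum(d * d for d in detail))
    return energies
```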

  19. Feature extraction from slice data for reverse engineering

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yingjie; LU Shangning

    2007-01-01

A new approach to feature extraction for slice data points is presented. The reconstruction of objects is performed as follows. First, all contours in each slice are extracted by contour tracing algorithms. Then the data points on the contours are analyzed, and the curve segments of the contours are divided into three categories: straight lines, conic curves and B-spline curves. The curve fitting methods are applied to each curve segment to remove the unwanted points within a pre-determined tolerance. Finally, the features, which consist of the objects and the connection relations among them, are found by matching the corresponding contours in adjacent slices, and 3D models are reconstructed based on the features. The proposed approach has been implemented in OpenGL, and the feasibility of the proposed method has been verified by several cases.
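
    Deciding whether a contour segment is a straight line within a pre-determined tolerance might be done as follows (a chord-distance sketch; the paper's fitting procedure may differ):

```python
def is_straight(points, tol):
    """Classify a contour segment as a straight line if every point lies
    within tol of the chord joining its endpoints."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    return all(abs(dy * (x - x0) - dx * (y - y0)) / length <= tol
               for x, y in points)
```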

  20. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...... difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... the complexity of hand-crafting feature extractors that combine information across dissimilar modalities of input. Frequent sequence mining is presented as a method to learn feature extractors that fuse physiological and contextual information. This method is evaluated in a game-based dataset and compared...

  1. Unsupervised Skin cancer detection by combination of texture and shape features in dermoscopy images

    Directory of Open Access Journals (Sweden)

Hamed Aghapanah Rudsari

    2014-05-01

Full Text Available In this paper a novel unsupervised feature extraction method for the detection of melanoma in skin images is presented. First, normal skin surrounding the lesion is removed in a segmentation process. In the next step, shape and texture features are extracted from the output image of the first step: GLCM, GLRLM, the proposed directional-frequency features, and some parameters of the Ripplet transform are used as texture features, while NRL features and Zernike moments are used as shape features. In total, 63 texture features and 31 shape features are extracted. Finally, the number of extracted features is reduced using the PCA method and a proposed method based on the Fisher criterion. The extracted features are classified using Perceptron neural networks, Support Vector Machines (SVM), 4-NN, and Naïve Bayes classifiers; the results show that SVM has the best performance. The proposed algorithm is applied to a database that consists of 160 labeled images. The overall results confirm the superiority of the proposed method in both accuracy and reliability over previous works.
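
    Feature reduction based on the Fisher criterion amounts to ranking features by between-class separation over within-class scatter; a two-class sketch:

```python
def fisher_score(values, labels):
    """Fisher criterion (m0 - m1)^2 / (v0 + v1) for one feature over two classes."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    # guard against zero within-class scatter
    return (ma - mb) ** 2 / ((va + vb) or 1.0)
```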

  2. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigations on Earth and in planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  3. Performance Evaluation of Content Based Image Retrieval on Feature Optimization and Selection Using Swarm Intelligence

    Directory of Open Access Journals (Sweden)

    Kirti Jain

    2016-03-01

Full Text Available The diversity and applicability of swarm intelligence is increasing every day in the fields of science and engineering. Swarm intelligence provides dynamic feature optimization capabilities. We have used swarm intelligence for feature optimization and feature selection in content-based image retrieval. The performance of content-based image retrieval is measured by precision and recall, whose values depend on the retrieval capacity of the image. The basic raw image content has visual features such as color, texture, shape and size. The partial feature extraction technique is based on a geometric invariant function. Three swarm intelligence algorithms were used for the optimization of features: ant colony optimization, particle swarm optimization (PSO), and the glowworm swarm optimization algorithm. The Corel image dataset and MATLAB software were used for evaluating performance.
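
    A binary particle swarm search over feature subsets can be sketched as follows (a heavily simplified PSO variant: the per-feature scores, selection penalty and update rule are all hypothetical):

```python
import random

def pso_feature_selection(scores, n_particles=8, iters=20, seed=1):
    """Binary PSO sketch: each particle is a 0/1 mask over features; fitness is
    the sum of per-feature scores minus a small penalty per selected feature."""
    rng = random.Random(seed)
    n = len(scores)
    fitness = lambda mask: sum(s for s, m in zip(scores, mask) if m) - 0.1 * sum(mask)
    best_mask, best_fit = None, float("-inf")
    swarm = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    for _ in range(iters):
        for i, mask in enumerate(swarm):
            f = fitness(mask)
            if f > best_fit:
                best_mask, best_fit = mask[:], f
            # move each particle toward the global best by copying random bits
            swarm[i] = [b if rng.random() < 0.5 else m
                        for b, m in zip(best_mask, mask)]
    return best_mask
```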

  4. Magnetic Resonance Imaging Features of Neuromyelitis Optica

    Energy Technology Data Exchange (ETDEWEB)

    You, Sun Kyung; Song, Chang June; Park, Woon Ju; Lee, In Ho; Son, Eun Hee [Chungnam National University College of Medicine, Chungnam National University Hospital, Daejeon (Korea, Republic of)

    2013-03-15

To report the magnetic resonance (MR) imaging features of the spinal cord and brain in patients with neuromyelitis optica (NMO). Between January 2001 and March 2010, the MR images (spinal cord, brain, and orbit) and the clinical and serologic findings of 11 NMO patients were retrospectively reviewed. Contrast-enhanced imaging of the spinal cord was performed (20/23). The presence and pattern of contrast enhancement in the spinal cord were classified into 5 types. Acute myelitis was monophasic in 8 patients (8/11, 72.7%), and optic neuritis preceded acute myelitis in most patients. A longitudinally extensive cord lesion (average, 7.3 vertebral segments) was involved. The most common type was diffuse and subtle enhancement of the spinal cord with a multifocal nodular, linear or segmental intense enhancement (45%). Most of the brain lesions (10 lesions in 5 of 11 patients) were located in the brain stem, thalamus and callososeptal interface. Anti-Ro autoantibody was positive in 2 patients, and they showed a high relapse rate of acute myelitis. Anti-NMO IgG was positive in 4 patients (4/7, 66.7%). The imaging findings of acute myelitis in NMO may be helpful in making an early diagnosis of NMO, which can result in severe damage to the spinal cord, and in the differential diagnosis of multiple sclerosis and inflammatory diseases of the spinal cord such as toxocariasis.

  5. Robust Colour Image Watermarking Scheme Based on Feature Points and Image Normalization in DCT Domain

    Directory of Open Access Journals (Sweden)

    Ibrahim Alsonosi Nasir

    2014-04-01

Full Text Available Geometric attacks can desynchronize the location of the watermark and hence cause incorrect watermark detection. This paper presents a robust colour image watermarking scheme based on visually significant feature points and an image normalization technique. The feature points are used as synchronization marks between watermark embedding and detection. The watermark is embedded into the non-overlapped normalized circular regions in the luminance component or the blue component of a color image. The embedding of the watermark is carried out by modifying the DCT coefficient values in selected blocks. The original unmarked image is not required for watermark extraction. Experimental results show that the proposed scheme successfully makes the watermark perceptually invisible as well as robust to common signal processing and geometric attacks.
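
    Embedding a watermark bit by modifying a DCT coefficient is commonly done by quantization-index modulation; a sketch of that idea (the paper's exact embedding rule is not given here, and the step size q is hypothetical):

```python
def embed_bit(coeff, bit, q=8.0):
    """Quantization-index modulation: snap a DCT coefficient to an even
    (bit 0) or odd (bit 1) multiple of the quantization step q."""
    k = round(coeff / q)
    if k % 2 != bit:
        k += 1 if coeff / q >= k else -1
    return k * q

def extract_bit(coeff, q=8.0):
    """Recover the embedded bit from the coefficient's quantization parity."""
    return int(round(coeff / q)) % 2
```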

  6. Image Zernike Moments Shape Feature Evaluation Based on Image Reconstruction

    Institute of Scientific and Technical Information of China (English)

LIU Mao-fu; HE Yan-xiang; YE Bin

    2007-01-01

The evaluation approach to the accuracy of image feature descriptors plays an important role in image feature extraction. We point out that the image shape feature can be described by a set of Zernike moments, while briefly introducing the basic concept of the Zernike moment. After discussing the image reconstruction technique based on the inverse Zernike moment transformation, an approach to evaluating the accuracy of the Zernike moments shape feature via the dissimilarity degree and the reconstruction ratio between the original image and the reconstructed image is proposed. The experimental results demonstrate the feasibility of this evaluation approach for image Zernike moments shape features.
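
    The dissimilarity degree and reconstruction ratio between the original and reconstructed images might be defined along these lines (one plausible formulation on flattened pixel lists; the paper's exact definitions may differ):

```python
def dissimilarity(original, reconstructed):
    """Mean absolute pixel difference between the original and reconstructed
    images (flattened to equal-length lists), normalized by the original's range."""
    diff = sum(abs(a - b) for a, b in zip(original, reconstructed))
    scale = (max(original) - min(original)) or 1
    return diff / (len(original) * scale)

def reconstruction_ratio(original, reconstructed, tol=0.1):
    """Fraction of pixels reconstructed to within tol of the original value."""
    ok = sum(1 for a, b in zip(original, reconstructed) if abs(a - b) <= tol)
    return ok / len(original)
```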

  7. Spectrum Feature Retrieval and Comparison of Remote Sensing Images Using Improved ISODATA Algorithm

    Institute of Scientific and Technical Information of China (English)

LIU Lei; JING Zhong-liang; XIAO Gang

    2004-01-01

Due to the large quantity of data and the high correlation among the spectra of remote sensing images, the K-L transformation is used to eliminate the correlation. An improved ISODATA (Iterative Self-Organizing Data Analysis Technique A) algorithm is used to extract the spectral features of the images. The computation is greatly reduced, and dynamic adjustment of the algorithm's parameters is realized. The comparison of features between two images is carried out, and good results are achieved in simulation.

  8. Extraction of Spatial-Temporal Features for Vision-Based Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

HUANG Yu; XU Guangyou; ZHU Yuanxin

    2000-01-01

One of the key problems in a vision-based gesture recognition system is the extraction of spatial-temporal features of gesturing. In this paper an approach of motion-based segmentation is proposed to realize this task. The direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion, and based on the dominant motion model the gesturing region, i.e., the dominant object, is extracted. Thus the spatial-temporal features of gestures can be extracted. Finally, the dynamic time warping (DTW) method is directly used to perform matching of 12 control gestures (6 for "translation" orders, 6 for "rotation" orders). A small demonstration system has been set up to verify the method, in which a panorama image viewer (built by mosaicing a sequence of standard "Garden" images) can be controlled with recognized gestures instead of the 3-D mouse tool.
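
    The DTW matching step is the standard dynamic-programming recurrence; a sketch with scalar features for brevity:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # extend the cheapest of the three admissible warping moves
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j],
                                                     D[i][j - 1],
                                                     D[i - 1][j - 1])
    return D[n][m]
```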

  9. Glioma grading using cell nuclei morphologic features in digital pathology images

    Science.gov (United States)

    Reza, Syed M. S.; Iftekharuddin, Khan M.

    2016-03-01

This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. In this work, our contributions are two-fold: 1) obtain an optimized cell nuclei segmentation method based on the pros and cons of the existing techniques in the literature, 2) extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
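
    The k-means clustering used to obtain representative morphologic features can be sketched as follows (plain k-means on small feature tuples; initialization and the iteration count are simplified):

```python
def kmeans(points, k, iters=20):
    """Plain k-means; the final centroids serve as representative features."""
    centroids = points[:k]  # naive initialization from the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute centroids; keep the old one if a cluster is empty
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids
```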

  10. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized.
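
    For two Gaussian classes sharing a covariance matrix, the decision boundary is a hyperplane whose normal, S^-1(mean_a - mean_b), is the single discriminant feature such a method would extract; a 2-D sketch (illustrative only, not the paper's general algorithm):

```python
def lda_direction(class_a, class_b):
    """Decision-boundary normal w = S^-1 (mean_a - mean_b) for two 2-D classes,
    using the pooled scatter matrix and an explicit 2x2 inverse."""
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]
    ma, mb = mean(class_a), mean(class_b)
    # pooled within-class scatter matrix
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]
```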

  11. Multilingual Artificial Text Extraction and Script Identification from Video Images

    Directory of Open Access Journals (Sweden)

    Akhtar Jamil

    2016-04-01

Full Text Available This work presents a system for extraction and script identification of multilingual artificial text appearing in video images. As opposed to most existing text extraction systems, which target textual occurrences in a particular script or language, we have proposed a generic multilingual text extraction system that relies on a combination of unsupervised and supervised techniques. The unsupervised approach is based on the application of image analysis techniques which exploit the contrast, alignment and geometrical properties of text and identify candidate text regions in an image. Potential text regions are then validated by an Artificial Neural Network (ANN) using a set of features computed from Gray Level Co-occurrence Matrices (GLCM). The script of the extracted text is finally identified using texture features based on Local Binary Patterns (LBP). The proposed system was evaluated on video images containing textual occurrences in five different languages including English, Urdu, Hindi, Chinese and Arabic. The promising results of the experimental evaluations validate the effectiveness of the proposed system for text extraction and script identification.
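
    A basic Local Binary Pattern descriptor of the kind used for script identification can be sketched as follows (the 8-neighbour, radius-1 variant over interior pixels):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c)."""
    center = img[r][c]
    neighbours = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1], img[r][c + 1],
                  img[r + 1][c + 1], img[r + 1][c], img[r + 1][c - 1], img[r][c - 1]]
    # set bit i when neighbour i is at least as bright as the centre
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels (256 bins)."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```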

  12. IMAGE LABELING FOR LIDAR INTENSITY IMAGE USING K-NN OF FEATURE OBTAINED BY CONVOLUTIONAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    M. Umemura

    2016-06-01

Full Text Available We propose an image labeling method for LIDAR intensity image obtained by Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) of feature obtained by Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNN is effective for various image recognition tasks, we try to use the feature of CNN (Caffenet) pre-trained by ImageNet. We use 4,096-dimensional feature at fc7 layer in the Caffenet as the descriptor of a region because the feature at fc7 layer has effective information for object classification. We extract the feature by the Caffenet from regions cropped from images. Since the similarity between features reflects the similarity of contents of regions, we can select top K similar regions cropped from training samples with a test region. Since regions in training images have manually-annotated ground truth labels, we vote the labels attached to top K similar regions to the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground truth labels. We divide 36 images into training (28 images) and test sets (8 images). We use class average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human beings in 97.8% of the pixels in test LIDAR intensity images.

  13. Image Labeling for LIDAR Intensity Image Using K-Nn of Feature Obtained by Convolutional Neural Network

    Science.gov (United States)

    Umemura, Masaki; Hotta, Kazuhiro; Nonaka, Hideki; Oda, Kazuo

    2016-06-01

    We propose an image labeling method for LIDAR intensity image obtained by Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) of feature obtained by Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNN is effective for various image recognition tasks, we try to use the feature of CNN (Caffenet) pre-trained by ImageNet. We use 4,096-dimensional feature at fc7 layer in the Caffenet as the descriptor of a region because the feature at fc7 layer has effective information for object classification. We extract the feature by the Caffenet from regions cropped from images. Since the similarity between features reflects the similarity of contents of regions, we can select top K similar regions cropped from training samples with a test region. Since regions in training images have manually-annotated ground truth labels, we vote the labels attached to top K similar regions to the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground truth labels. We divide 36 images into training (28 images) and test sets (8 images). We use class average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human beings in 97.8% of the pixels in test LIDAR intensity images.
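The KNN label-voting step described above reduces to a nearest-neighbour search over region descriptors followed by a majority vote. A minimal sketch, with toy 4-D descriptors standing in for the 4,096-D fc7 features (names and data are illustrative):

```python
import numpy as np
from collections import Counter

def knn_label_vote(test_desc, train_descs, train_labels, k=3):
    """Majority vote among the k nearest training descriptors
    (Euclidean distance), as in KNN-based region labeling."""
    dist = np.linalg.norm(train_descs - test_desc, axis=1)
    nearest = np.argsort(dist)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 4-D descriptors standing in for 4,096-D fc7 features.
rng = np.random.default_rng(0)
road = rng.normal(0.0, 0.1, size=(10, 4))
walk = rng.normal(1.0, 0.1, size=(10, 4))
descs = np.vstack([road, walk])
labels = ["road"] * 10 + ["cross-walk"] * 10
```

A query descriptor near one cluster inherits that cluster's label; in the paper the vote is then broadcast to every pixel of the test region.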

  14. Micro-Doppler Feature Extraction and Recognition Based on Netted Radar for Ballistic Targets

    Directory of Open Access Journals (Sweden)

    Feng Cun-qian

    2015-12-01

Full Text Available This study examines the complexities of using netted radar to recognize and resolve ballistic midcourse targets. The application of micro-motion feature extraction to ballistic midcourse targets is analyzed, and the current status of application and research on micro-motion feature recognition is summarized for single-function radar networks such as low- and high-resolution imaging radar networks. Advantages and disadvantages of these networks are discussed with respect to target recognition. Hybrid-mode radar networks combine low- and high-resolution imaging radar and provide a specific reference frequency that is the basis for ballistic target recognition. Main research trends are discussed for hybrid-mode networks that apply micro-motion feature extraction to ballistic midcourse targets.

  15. Detection and Classification of Cancer from Microscopic Biopsy Images Using Clinically Significant and Biologically Interpretable Features

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh

    2015-01-01

A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages involved in the proposed methodology include enhancement of microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed in each of the design steps of the proposed framework after a comparative analysis of commonly used methods in each category. For highlighting the details of the tissue and its structures, the contrast limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape as well as morphology based features are extracted from the segmented images. These include gray level texture features, color based features, color gray level texture features, Law's Texture Energy based features, Tamura's features, and wavelet features. Finally, the K-nearest neighbor method is used for classification of images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters for the four fundamental tissues (connective, epithelial, muscular, and nervous) on 1000 randomly selected microscopic biopsy images. PMID:27006938
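The k-means segmentation stage can be illustrated with Lloyd's algorithm on raw pixel intensities. This is a deliberately minimal stand-in for the paper's segmentation step; real biopsy images would be clustered in a colour space, and the initialization here is a simple intensity spread:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    """Lloyd's k-means on raw pixel intensities: a minimal stand-in for
    the k-means segmentation stage (label image + cluster centres)."""
    x = np.asarray(img, dtype=float).ravel()
    centers = np.linspace(x.min(), x.max(), k)      # spread initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(np.asarray(img).shape), centers

# Dark background with a bright square: two clean clusters.
img = np.full((8, 8), 10.0)
img[2:6, 2:6] = 200.0
seg, centers = kmeans_segment(img, k=2)
```

Pixels are assigned to the nearest centre and the centres are re-estimated until they stabilize; the resulting label image separates cells from background.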

  16. Automatic classification of hepatocellular carcinoma images based on nuclear and structural features

    Science.gov (United States)

    Kiyuna, Tomoharu; Saito, Akira; Marugame, Atsushi; Yamashita, Yoshiko; Ogura, Maki; Cosatto, Eric; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2013-03-01

Diagnosis of hepatocellular carcinoma (HCC) on the basis of digital images is a challenging problem because, unlike gastrointestinal carcinoma, strong structural and morphological features are limited and sometimes absent from HCC images. In this study, we describe the classification of HCC images using statistical distributions of features obtained from image analysis of cell nuclei and hepatic trabeculae. Images of 130 hematoxylin-eosin (HE) stained histologic slides were captured at 20X by a slide scanner (Nanozoomer, Hamamatsu Photonics, Japan), and 1112 region-of-interest (ROI) images were extracted for classification (551 negatives and 561 positives, including 113 well-differentiated positives). For a single nucleus, the following features were computed: area, perimeter, circularity, ellipticity, long and short axes of the elliptic fit, contour complexity, and gray level co-occurrence matrix (GLCM) texture features (angular second moment, contrast, homogeneity and entropy). In addition, distributions of nuclear density and hepatic trabecula thickness within an ROI were also extracted. To represent an ROI, statistical distributions (mean, standard deviation and percentiles) of these features were used. In total, 78 features were extracted for each ROI and a support vector machine (SVM) was trained to classify negative and positive ROIs. Experimental results using 5-fold cross validation show 90% sensitivity at 87.8% specificity. The use of statistical distributions over a relatively large area makes the HCC classifier robust to occasional failures in the extraction of nuclear or hepatic trabecula features, thus providing stability to the system.
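The per-nucleus shape features (area, perimeter, circularity) can be computed directly from a binary nucleus mask. A sketch under the assumption of a boundary-edge perimeter estimate, which the paper does not specify; the mask and names are illustrative:

```python
import numpy as np

def shape_features(mask):
    """Area, perimeter and circularity (4*pi*A/P^2) of a binary mask;
    perimeter approximated by counting fg/bg boundary edges."""
    m = np.pad(np.asarray(mask, dtype=bool), 1)
    area = int(m.sum())
    perimeter = int(np.sum(m[:, 1:] != m[:, :-1]) +
                    np.sum(m[1:, :] != m[:-1, :]))
    circularity = 4.0 * np.pi * area / perimeter ** 2
    return area, perimeter, circularity

# A 4x4 square: 16 pixels, 16 boundary edges, circularity pi/4.
nucleus = np.zeros((6, 6), dtype=int)
nucleus[1:5, 1:5] = 1
area, perim, circ = shape_features(nucleus)
```

Circularity is 1 for a perfect disc and drops for irregular nuclei, which is what makes it discriminative for malignancy grading.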

  17. Active Shape Model of Combining Pca and Ica: Application to Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    DENG Lin; RAO Ni-ni; WANG Gang

    2006-01-01

Active Shape Model (ASM) is a powerful statistical tool for extracting the facial features of a face image under frontal view. It mainly relies on Principal Component Analysis (PCA) to statistically model the variability in the training set of example shapes. Independent Component Analysis (ICA) has been proven to be more efficient than PCA for extracting face features. In this paper, we combine PCA and ICA in a consecutive strategy to form a novel ASM. Firstly, an initial model, which captures the global shape variability in the training set, is generated by the PCA-based ASM. Then, the final shape model, which captures more local characteristics, is established by the ICA-based ASM. Experimental results verify that the accuracy of facial feature extraction is statistically significantly improved by applying the ICA modes after the PCA modes.

  18. Focal-plane CMOS wavelet feature extraction for real-time pattern recognition

    Science.gov (United States)

    Olyaei, Ashkan; Genov, Roman

    2005-09-01

    Kernel-based pattern recognition paradigms such as support vector machines (SVM) require computationally intensive feature extraction methods for high-performance real-time object detection in video. The CMOS sensory parallel processor architecture presented here computes delta-sigma (ΔΣ)-modulated Haar wavelet transform on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs distributed spatial focal-plane sampling and concurrent weighted average quantization. The architecture is benchmarked in SVM face detection on the MIT CBCL data set. At 90% detection rate, first-level Haar wavelet feature extraction yields a 7.9% reduction in the number of false positives when compared to classification with no feature extraction. The architecture yields 1.4 GMACS simulated computational throughput at SVGA imager resolution at 8-bit output depth.
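A first-level 2-D Haar wavelet transform, the feature computed on the focal plane above, splits an image into an average (LL) and three detail sub-bands. A software sketch in NumPy; the chip computes this in the analog ΔΣ domain, so this is only the reference arithmetic:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: average (LL) and detail
    (LH, HL, HH) sub-bands of an even-sized image."""
    a = np.asarray(img, dtype=float)
    lo = (a[0::2, :] + a[1::2, :]) / 2.0      # pairwise rows: average
    hi = (a[0::2, :] - a[1::2, :]) / 2.0      # pairwise rows: detail
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# A constant image puts all of its energy in the LL band.
ll, lh, hl, hh = haar2d(np.ones((4, 4)))
```

Feeding the sub-band coefficients rather than raw pixels to the SVM is what yields the reduction in false positives reported above.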

AUTOMATIC CLASSIFICATION OF POINT CLOUDS EXTRACTED FROM ULTRACAM STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2015-12-01

Full Text Available Automatic extraction of building roofs, streets and vegetation is a prerequisite for many GIS (Geographic Information System) applications, such as urban planning and 3D building reconstruction. Nowadays, with advances in image processing and image matching, dense point clouds can be produced by combining feature-based and template-based matching techniques. Point cloud classification is an important step in automatic feature extraction. Therefore, in this study, a classification of point clouds based on color and shape features is implemented. We use two images with proper overlap acquired by an UltraCam-X camera over Yasouj, Iran, a semi-urban area with buildings of different heights. Our goal is to classify buildings and vegetation among these points. An algorithm is developed based on the color characteristics of the point cloud, an appropriate DEM (Digital Elevation Model), and a point clustering method. Firstly, trees and high vegetation are classified using the points' color characteristics and a vegetation index. Then, a bare-earth DEM is used to separate ground and non-ground points. Non-ground points are then divided into clusters based on height and local neighborhood. One or more clusters are initialized based on the maximum height of the points, and each cluster is then extended by applying height and neighborhood constraints. Finally, planar roof segments are extracted from each cluster of points following a region-growing technique.

Automatic Classification of Point Clouds Extracted from Ultracam Stereo Images

    Science.gov (United States)

    Modiri, M.; Masumi, M.; Eftekhari, A.

    2015-12-01

Automatic extraction of building roofs, streets and vegetation is a prerequisite for many GIS (Geographic Information System) applications, such as urban planning and 3D building reconstruction. Nowadays, with advances in image processing and image matching, dense point clouds can be produced by combining feature-based and template-based matching techniques. Point cloud classification is an important step in automatic feature extraction. Therefore, in this study, a classification of point clouds based on color and shape features is implemented. We use two images with proper overlap acquired by an UltraCam-X camera over Yasouj, Iran, a semi-urban area with buildings of different heights. Our goal is to classify buildings and vegetation among these points. An algorithm is developed based on the color characteristics of the point cloud, an appropriate DEM (Digital Elevation Model), and a point clustering method. Firstly, trees and high vegetation are classified using the points' color characteristics and a vegetation index. Then, a bare-earth DEM is used to separate ground and non-ground points. Non-ground points are then divided into clusters based on height and local neighborhood. One or more clusters are initialized based on the maximum height of the points, and each cluster is then extended by applying height and neighborhood constraints. Finally, planar roof segments are extracted from each cluster of points following a region-growing technique.

  1. 3D FEATURE POINT EXTRACTION FROM LIDAR DATA USING A NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Y. Feng

    2016-06-01

Full Text Available Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data and then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a proper alternative for use as landmarks: they can be assumed to be present in the environment independent of particular object classes. To match online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Trained with a backpropagation algorithm, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
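The Shi-Tomasi corner measure used to seed candidate feature points is the smaller eigenvalue of the local structure tensor. A compact NumPy sketch; the window size and gradient scheme are illustrative choices, not those of the paper:

```python
import numpy as np

def shi_tomasi_response(img, win=1):
    """Smaller eigenvalue of the local structure tensor (Shi-Tomasi
    corner measure), accumulated over a (2*win+1)^2 window."""
    a = np.asarray(img, dtype=float)
    gy, gx = np.gradient(a)                    # axis 0 = rows, axis 1 = cols
    def boxsum(z):
        out = np.zeros_like(z)
        for dr in range(-win, win + 1):
            for dc in range(-win, win + 1):
                out += np.roll(np.roll(z, dr, axis=0), dc, axis=1)
        return out
    sxx, syy, sxy = boxsum(gx * gx), boxsum(gy * gy), boxsum(gx * gy)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return tr / 2.0 - np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))

# Bright square: the response peaks at its corner, vanishes on edges/flats.
img = np.zeros((12, 12))
img[4:, 4:] = 1.0
r = shi_tomasi_response(img)
```

Edges have one strong gradient direction (minimum eigenvalue near zero), while true corners have two, which is what makes them repeatable candidates.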

  2. Special object extraction from medieval books using superpixels and bag-of-features

    Science.gov (United States)

    Yang, Ying; Rushmeier, Holly

    2017-01-01

    We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working on the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects by using a bag-of-features approach, where a superpixel category classifier is trained with the local features of the superpixels of the training images. With the trained classifier, we are able to assign the category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, as compared to the state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.

  3. Wear Debris Identification Using Feature Extraction and Neural Network

    Institute of Scientific and Technical Information of China (English)

    王伟华; 马艳艳; 殷勇辉; 王成焘

    2004-01-01

A method and results of identification of wear debris using their morphological features are presented. Color images of wear debris were used as initial data. Each particle was characterized by a set of numerical parameters combining its shape, color and surface texture features through a computer vision system. Those features were used as the input vector of an artificial neural network for wear debris identification. A radial basis function (RBF) network based model suitable for wear debris recognition was established, and its algorithm is presented in detail. Compared with traditional recognition methods, the RBF network model converges faster and is more accurate.

  4. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on a short time scale using observations at time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures in the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The two algorithms are combined by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.

  5. An Efficient Method for Extracting Features from Blurred Fingerprints Using Modified Gabor Filter

    Directory of Open Access Journals (Sweden)

    R.Vinothkanna

    2012-09-01

Full Text Available Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes. Fingerprint is one of the most developed biometrics, with a long history of research and design. Fingerprint recognition identifies people by the impressions made by the minute ridge formations or patterns found on the fingertips. The extraction of features from blurred or unclear fingerprints is difficult, so instead of ridges we extract valleys from the same images, because fingerprints consist of both ridges and valleys as features. We obtained good results for valley extraction with different filters, including the Gabor filter. In this paper we modify the Gabor filter to reduce its time consumption and to extract more valleys than the standard Gabor filter.
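A real-valued Gabor kernel, the starting point for the modified filter discussed above, is a Gaussian envelope multiplied by an oriented sinusoidal carrier. A sketch with illustrative parameter values; the paper's modification is not reproduced here:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Real (even) Gabor kernel: Gaussian envelope times a cosine carrier
    oriented at angle theta; lambd is the carrier wavelength in pixels."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / lambd)

# A bank of orientations, as used for ridge/valley enhancement.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
k = bank[0]
```

Convolving the fingerprint with each kernel in the bank enhances ridges (and, with an inverted image, valleys) aligned with that kernel's orientation.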

  6. Toward Automated Feature Detection in UAVSAR Images

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise, based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms, shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters; deviations from -2 suggest that short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out without consistency from one interferogram to another. We attribute this additional noise, which afflicts difference estimates at longer distances, to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time reflect fault slip and block motion.
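The Allan-variance curvature measure used above can be sketched as the mean squared second difference of a profile at varying lags. This is one common discrete formulation; the authors' exact estimator may differ:

```python
import numpy as np

def allan_variance(profile, scales):
    """Mean squared second difference of a 1-D profile at each lag:
    a discrete stand-in for the Allan-variance curvature measure."""
    profile = np.asarray(profile, dtype=float)
    out = []
    for s in scales:
        d2 = profile[2 * s:] - 2.0 * profile[s:-s] + profile[:-2 * s]
        out.append(0.5 * np.mean(d2 ** 2))
    return np.array(out)

# A linear ramp has zero curvature at every scale; white noise does not.
rng = np.random.default_rng(1)
av_noise = allan_variance(rng.normal(size=4000), [1, 2, 4, 8])
av_ramp = allan_variance(np.arange(100.0), [1, 2, 4])
```

A linear trend cancels exactly, so the measure isolates curvature-like noise; plotting it against lag on log-log axes gives the slopes discussed in the abstract.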

  7. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. The experiments use the FormSpring.me dataset and investigate the effects of preprocessing methods; several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and the information gain and chi-square feature selection methods. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbullying detection performance. When the classifiers are compared, C4.5 performs best on this dataset.
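The chi-square feature selection mentioned above scores each feature by the dependence between its presence and the class label. A minimal sketch for binary term-presence features; the data and names are illustrative, not the study's pipeline:

```python
import numpy as np

def chi2_score(X, y):
    """Chi-square statistic between each binary feature (e.g. term
    presence) and a binary class label; larger = more informative."""
    X, y = np.asarray(X), np.asarray(y)
    n, scores = len(y), []
    for f in X.T:
        # 2x2 contingency table: feature present/absent vs class 0/1
        obs = np.array([[np.sum((f == v) & (y == c)) for c in (0, 1)]
                        for v in (1, 0)], dtype=float)
        exp = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0, keepdims=True) / n
        scores.append(np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-12)))
    return np.array(scores)

# Feature 0 perfectly predicts the label; feature 1 is uninformative.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
y = np.array([1, 1, 0, 0])
s = chi2_score(X, y)
```

Keeping only the top-scoring terms shrinks the feature space the classifiers see, which is the selection effect the study measures.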

  8. A Narrative Methodology to Recognize Iris Patterns By Extracting Features Using Gabor Filters and Wavelets

    Directory of Open Access Journals (Sweden)

    Shristi Jha

    2016-01-01

Full Text Available Iris pattern recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to images of one or both of the irises of an individual's eyes, whose complex random patterns are unique, stable, and visible from some distance. Iris recognition uses video camera technology with subtle near-infrared illumination to acquire images of the detail-rich, intricate structures of the iris which are visible externally. In this paper the input image is captured and, since the success of iris recognition depends on image quality, the captured image is subjected to preliminary preprocessing techniques (localization, segmentation, normalization and noise detection), followed by texture and edge feature extraction using Gabor filters and wavelets; the processed image is then matched against templates stored in the database to detect iris patterns.

  9. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

Full Text Available One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Even though drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered a way to increase sensor stability. Several studies have evidenced the improved stability of time-variable signals elicited by modulating either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, features can be extracted in a shorter time than that necessary to calculate the usual features defined in steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal oxide semiconductor gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments measured the dispersion of sensor features in repeated sequences of a limited number of experimental conditions. Results show that the features extracted during temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate over time with respect to the temperature modulation.

  10. Color and neighbor edge directional difference feature for image retrieval

    Institute of Scientific and Technical Information of China (English)

    Chaobing Huang; Shengsheng Yu; Jingli Zhou; Hongwei Lu

    2005-01-01

A novel image feature termed the neighbor edge directional difference unit histogram is proposed, in which the neighbor edge directional difference unit is defined and computed for every pixel in the image and used to generate the histogram. This histogram and the color histogram are used as feature indexes to retrieve color images. The feature is invariant to image scaling and translation and is more descriptive for natural color images. Experimental results show that the feature achieves better retrieval performance than other color-spatial features.

  11. Non-rigid registration of medical images based on ordinal feature and manifold learning

    Science.gov (United States)

    Li, Qi; Liu, Jin; Zang, Bo

    2015-12-01

With the rapid development of medical imaging technology, medical image research and application have become a research hotspot. This paper offers a solution to non-rigid registration of medical images based on ordinal features (OF) and manifold learning. The structural features of medical images are extracted by combining ordinal features with locally linear embedding (LLE) to improve the precision and speed of the registration algorithm. A physical model based on manifold learning and optimization search is constructed according to the complicated characteristics of non-rigid registration. The experimental results demonstrate the robustness and applicability of the proposed registration scheme.
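The first step of locally linear embedding, computing the weights that reconstruct each point from its neighbours, can be sketched as a small constrained least-squares solve. This is illustrative only; the paper's combination of ordinal features with LLE is not reproduced:

```python
import numpy as np

def lle_weights(X, i, neighbors):
    """LLE step one: weights (summing to one) that best reconstruct
    point i from its neighbours, via the regularised local Gram matrix."""
    Z = X[neighbors] - X[i]                       # neighbours centred on x_i
    G = Z @ Z.T
    G = G + 1e-9 * np.trace(G) * np.eye(len(neighbors))  # regularise
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()

# On a straight line the midpoint is the average of its two neighbours.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
w = lle_weights(X, 1, [0, 2])
```

The weights are invariant to rotation, scaling and translation of the neighbourhood, which is why LLE preserves local structure when the points are mapped to a lower-dimensional space.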

  12. Analysis and Reliability Performance Comparison of Different Facial Image Features

    Directory of Open Access Journals (Sweden)

    J. Madhavan

    2014-11-01

Full Text Available This study performs a reliability analysis of different facial feature extraction methods, using weighted retrieval accuracy as the number of facial database images increases. Many methods analyzed in existing papers use facial databases of constant size, and little work has been carried out to study performance in terms of reliability or how a method performs as the database grows. In this study, several feature extraction methods are analyzed with the regular performance measure, and the performance measures are also modified to fit real-time requirements by giving weightings to the closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD, and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported. Among these methods, Gabor wavelet with HMM is more reliable than the other three. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.

  13. Gradient Algorithm on Stiefel Manifold and Application in Feature Extraction

    Directory of Open Access Journals (Sweden)

    Zhang Jian-jun

    2013-09-01

Full Text Available To improve the computational efficiency of system feature extraction, reduce the occupied memory space, and simplify program design, a modified gradient descent method on the Stiefel manifold is proposed based on the optimization framework of Riemannian geometry. Different geodesic calculation formulas are used for different scenarios, and a polynomial approximation of the geodesic equations is also employed. The Qin Jiushao-Horner polynomial algorithm, together with line-search strategies and iteration step-size adaptation, is adopted. The gradient descent algorithm on the Stiefel manifold applied to Principal Component Analysis (PCA) is discussed in detail as an example of system feature extraction. Theoretical analysis and simulation experiments show that the new method achieves superior performance in both convergence rate and calculation efficiency while ensuring column orthonormality. In addition, it is easier to implement in software or hardware.
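Gradient-based optimisation on the Stiefel manifold can be illustrated on the PCA example: maximise tr(WᵀCW) subject to WᵀW = I. The sketch below uses a simple tangent-space projection and a QR retraction rather than the paper's geodesic and polynomial machinery; the data and step size are illustrative:

```python
import numpy as np

def stiefel_pca(C, p, steps=1000, lr=0.05):
    """Maximise tr(W^T C W) over {W : W^T W = I} by gradient ascent with
    a tangent-space projection and QR retraction (a simple retraction,
    not the geodesic/polynomial scheme of the paper)."""
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.normal(size=(C.shape[0], p)))
    for _ in range(steps):
        G = 2.0 * C @ W                                 # Euclidean gradient
        R = G - W @ (W.T @ G + G.T @ W) / 2.0           # tangent projection
        W, _ = np.linalg.qr(W + lr * R)                 # QR retraction
    return W

C = np.diag([5.0, 2.0, 1.0, 0.1])
W = stiefel_pca(C, 2)   # should converge to the span of e1, e2
```

The retraction keeps the iterate exactly orthonormal at every step, so no separate re-orthogonalisation pass is needed: for the diagonal covariance above, the columns converge to the dominant two-dimensional eigenspace.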

  14. A Review on Feature Extraction Techniques in Face Recognition

    Directory of Open Access Journals (Sweden)

    Rahimeh Rouhi

    2013-01-01

Full Text Available Face recognition systems, due to their significant applications in security, have been of great importance in recent years. An exact balance between computing cost, robustness and recognition ability is an important characteristic of such systems. Besides, designing systems that perform under different conditions (e.g., illumination, variation of pose, different expressions, etc.) is a challenging problem in feature extraction for face recognition. As feature extraction is an important step in face recognition, the present study reviews four feature extraction techniques used in face recognition, presents comparative results, and discusses the advantages and disadvantages of these methods.

  15. Modification of evidence theory based on feature extraction

    Institute of Scientific and Technical Information of China (English)

    DU Feng; SHI Wen-kang; DENG Yong

    2005-01-01

    Although evidence theory has been widely used in information fusion due to its effectiveness in uncertainty reasoning, the classical DS evidence theory exhibits counter-intuitive behaviors when highly conflicting information exists. Many modification methods have been developed, which can be classified into two kinds of ideas: modifying the combination rules or modifying the evidence sources. In order to make the modification more reasonable and more effective, this paper first gives a thorough analysis of some typical existing modification methods, and then extracts the intrinsic features of the evidence sources by using evidence distance theory. Based on the extracted features, two modification schemes of evidence theory, following the two corresponding modification ideas, are proposed. The results of numerical examples demonstrate the good performance of the schemes when combining evidence sources with highly conflicting information.
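
For readers unfamiliar with the counter-intuitive behavior mentioned above, Zadeh's classic example reproduces it in a few lines (classical Dempster's rule restricted to singleton hypotheses; the record's evidence-distance modification is not shown):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for mass functions over singleton hypotheses.

    m1, m2: dicts mapping hypothesis -> mass. With singleton focal
    elements, two hypotheses intersect only when they are identical, so
    everything else contributes to the conflict mass.
    """
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {a: m1[a] * m2[a] / (1.0 - conflict) for a in m1 if a in m2}

# Zadeh's high-conflict example: both sources consider C nearly
# impossible, yet the combined belief assigns C essentially all the mass.
m1 = {"A": 0.99, "C": 0.01}
m2 = {"B": 0.99, "C": 0.01}
print(dempster_combine(m1, m2))
```

This is exactly the failure mode the modification methods surveyed in the record are designed to avoid.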

  16. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  17. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
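
The descriptor-matching step both of these records rely on is commonly implemented with Lowe's ratio test: keep a correspondence only when the nearest neighbour is clearly closer than the runner-up. A small numpy sketch with made-up 2-D descriptors (real SIFT descriptors are 128-D):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B using Lowe's ratio test.

    A match is kept only when the nearest neighbour in B is clearly closer
    than the second-nearest, which rejects ambiguous correspondences.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]           # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# one distinctive descriptor and one ambiguous one (b1 and b2 are near-twins)
desc_b = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.98]])
desc_a = np.array([[0.9, 0.1], [0.05, 0.99]])
print(ratio_test_match(desc_a, desc_b))   # only the unambiguous pair survives
```

The ambiguity rejection is what makes the test robust on repetitive or badly textured regions, which is precisely the weakness the records discuss.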

  18. Fast Image Retrieval of Textile Industrial Accessory Based on Multi-Feature Fusion

    Institute of Scientific and Technical Information of China (English)

    沈文忠; 杨杰

    2004-01-01

    A hierarchical retrieval scheme for the accessory image database is proposed based on textile industrial accessory contour features and region features. First, the smallest enclosing rectangle feature [1] (degree of accessory coordination) is used to filter the image database and narrow the image search scope. After the accessory contour information and region information are extracted, the fused multi-feature of the centroid distance Fourier descriptor and the distance distribution histogram is adopted to complete image retrieval accurately. All the features above are invariant under translation, scaling and rotation. Results of a test on an image database of 1,000 accessory images demonstrate that the method is effective and practical, with high accuracy and fast speed.
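
The centroid distance Fourier descriptor mentioned above can be sketched directly: subtracting the centroid gives translation invariance, taking FFT magnitudes removes the starting-point dependence of the closed-contour signature (and the signature itself is unchanged by geometric rotation), and normalising by the DC term removes scale. An illustrative numpy sketch on a synthetic square contour:

```python
import numpy as np

def centroid_distance_fd(contour, n_coeffs=8):
    """Translation/scale/rotation-invariant shape signature.

    contour: (N, 2) array of boundary points sampled along the contour.
    """
    c = contour - contour.mean(axis=0)      # translation invariance
    r = np.hypot(c[:, 0], c[:, 1])          # centroid distance signal
    F = np.abs(np.fft.fft(r))               # start-point/rotation invariance
    return F[1:n_coeffs + 1] / F[0]         # scale invariance

# a square contour, then the same square scaled, shifted and re-indexed
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
d = np.maximum(np.abs(np.cos(t)), np.abs(np.sin(t)))
square = np.c_[np.cos(t) / d, np.sin(t) / d]
same = 3.0 * np.roll(square, 16, axis=0) + np.array([5.0, -2.0])
print(np.allclose(centroid_distance_fd(square), centroid_distance_fd(same)))
```

The two descriptors agree despite the similarity transform, which is the invariance property the record claims for its fused features.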

  19. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    Full Text Available People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly based on the electroencephalogram (EEG) to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform the sensitivity-based pleasant/unpleasant classification of images based on the affective features extracted from images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs produced when an individual observed an image. However, it is difficult to measure the EEG when a subject views an unknown image. Thus, we propose a solution where a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for the classification of preferred images, while sensitivity filtering using local binary patterns was suitable for the classification of unpleasant images. Moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.

  20. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    Science.gov (United States)

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is relatively time-consuming, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area. Matching between two corresponding features from different images can then be efficiently performed, which greatly reduces the matching time. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for reconstructing a nasal cavity imaged by a rigid nasal endoscope.

  1. Extraction of urban vegetation with Pleiades multiangular images

    Science.gov (United States)

    Lefebvre, Antoine; Nabucet, Jean; Corpetti, Thomas; Courty, Nicolas; Hubert-Moy, Laurence

    2016-10-01

    Vegetation is essential in urban environments since it provides significant services in terms of health, heat, property value, ecology ... As part of the European Union Biodiversity Strategy Plan for 2020, the protection and development of green infrastructures is being strengthened in urban areas. In order to evaluate and monitor the quality of green infrastructures, this article investigates the contribution of Pléiades multi-angular images to extracting and characterizing low and high urban vegetation. From such images one can extract both spectral and elevation information. Our method is composed of three main steps: (1) computation of a normalized Digital Surface Model from the multi-angular images; (2) extraction of spectral and contextual features; (3) classification of the vegetation classes (tree and grass) with a random forest classifier. Results for the city of Rennes, France, show the ability of multi-angular images to derive a DSM in urban areas despite building heights. They also highlight the importance of elevation information and its complementarity with contextual information for extracting urban vegetation.
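
As a toy stand-in for step (3), the way elevation (the normalized DSM, i.e. height above ground) and spectral information complement each other can be shown with two thresholds. The record's random forest over richer features is not reproduced, and the threshold values and pixels below are invented:

```python
import numpy as np

def classify_vegetation(ndvi, ndsm, ndvi_thresh=0.3, height_thresh=2.0):
    """Toy per-pixel labelling: 0 = non-vegetation, 1 = grass, 2 = tree.

    NDVI separates vegetated from non-vegetated pixels; the normalized DSM
    (height above ground) then separates low from high vegetation.
    """
    veg = ndvi > ndvi_thresh
    labels = np.zeros(ndvi.shape, dtype=int)
    labels[veg & (ndsm < height_thresh)] = 1    # vegetated and low -> grass
    labels[veg & (ndsm >= height_thresh)] = 2   # vegetated and high -> tree
    return labels

ndvi = np.array([0.1, 0.6, 0.7])   # road, lawn, tree crown
ndsm = np.array([0.0, 0.2, 8.0])   # height above ground (m)
print(classify_vegetation(ndvi, ndsm))   # [0 1 2]
```

A real classifier replaces the hand-set thresholds with decision boundaries learned from training samples, but the feature roles are the same.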

  2. Fast Fractal Image Encoding Based on Special Image Features

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chao; ZHOU Yiming; ZHANG Zengke

    2007-01-01

    The fractal image encoding method has received much attention for its many advantages over other methods, such as high decoding quality at high compression ratios. However, because every range block must be compared to all domain blocks in the codebook to find the best-matched one during the coding procedure, baseline fractal coding (BFC) is quite time consuming. To speed up fractal coding, a new fast fractal encoding algorithm is proposed. This algorithm aims at reducing the size of the search window during the domain-range matching process to minimize the computational cost. A new theorem presented in this paper shows that a special feature of the image can be used to do this work. Based on this theorem, the most inappropriate domain blocks, whose features are not similar to that of the given range block, are excluded before matching. Thus, the best-matched block can be captured much more quickly than in the BFC approach. The experimental results show that the runtime of the proposed method is reduced greatly compared to the BFC method. At the same time, the new algorithm also achieves high reconstructed image quality. In addition, the method can be incorporated with other fast algorithms to achieve better performance. Therefore, the proposed algorithm has a much better application potential than BFC.

  3. FEATURES AND GROUND AUTOMATIC EXTRACTION FROM AIRBORNE LIDAR DATA

    OpenAIRE

    D. Costantino; M. G. Angelini

    2012-01-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. This applies the moments of third order (skewness) and fourth order (kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of weights for the measures, provided the desired results, which is a finer classification and l...

  4. Extracting BI-RADS Features from Portuguese Clinical Texts

    OpenAIRE

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method.

  6. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    Science.gov (United States)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to the time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe; beach and berm morphology is then extracted shoreward of the dune toe, and foredune morphology landward of it. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face, and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
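
The curvature-based dune-toe identification mentioned above can be sketched on a single cross-shore transect; the exact toe definition used by the authors may differ, and the synthetic profile below is invented:

```python
import numpy as np

def dune_toe_index(x, z):
    """Locate the dune toe on one cross-shore transect.

    A common convention (not necessarily the authors' exact one): the toe
    is the point of maximum concave-up curvature z'' of the elevation
    profile, i.e. where the flat beach turns up into the dune face.
    """
    dz = np.gradient(z, x)                 # cross-shore slope
    curvature = np.gradient(dz, x)         # second derivative of elevation
    return int(np.argmax(curvature))

# synthetic transect: flat beach (z = 1 m), then a dune face rising landward
x = np.linspace(0.0, 50.0, 101)
z = np.where(x < 30.0, 1.0, 1.0 + 0.5 * (x - 30.0))
toe = dune_toe_index(x, z)
print(x[toe])   # near x = 30, the beach/dune-face break in slope
```

On surveyed data the profile would first be smoothed, since lidar surface roughness produces many spurious curvature maxima.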

  7. Geometric feature extraction by a multimarked point process.

    Science.gov (United States)

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has a simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  8. Features and Ground Automatic Extraction from Airborne LIDAR Data

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. This applies the moments of third order (skewness) and fourth order (kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of weights for the measures, provided the desired results, which is a finer and less noisy classification. The process has been carried out in Matlab, but to reduce processing time, given the large data density, the analysis has been limited to a moving window. It was, therefore, arranged to produce subscenes in order to cover the entire area. The performance of the algorithm confirms its robustness and the goodness of the results. Employment of effective processing strategies to improve the automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation of 3D information extraction using remotely sensed large datasets. After obtaining the geometric features from LiDAR data, we want to complete the research by creating an algorithm to vectorize the features and extract the DTM.
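
A minimal sketch of the skewness-based filtering idea: ground-only elevations tend toward a symmetric distribution, while buildings and canopy skew it positively, so the highest returns are peeled off until the skewness of what remains drops to zero. This is the common skewness-balancing rule; the record's weighted-kurtosis refinement is not reproduced, and the synthetic elevations are invented.

```python
import numpy as np

def skewness(z):
    d = z - z.mean()
    return (d**3).mean() / z.std()**3

def kurtosis(z):
    # fourth moment, used by the record's refinement (not applied below)
    d = z - z.mean()
    return (d**4).mean() / z.std()**4

def ground_filter(z, max_iter=100):
    """Skewness-balancing ground filter: drop the highest return until the
    elevation distribution is no longer positively skewed."""
    z = np.sort(z)
    for _ in range(max_iter):
        if z.size < 3 or skewness(z) <= 0.0:
            break
        z = z[:-1]                      # remove the current highest return
    return z

rng = np.random.default_rng(1)
ground = rng.normal(100.0, 0.3, 200)    # terrain returns (m)
objects = rng.normal(110.0, 1.0, 30)    # building/canopy returns (m)
kept = ground_filter(np.concatenate([ground, objects]))
```

On this toy scene the object returns sit well above the terrain, so all of them are removed before the skewness balances.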

  9. Automated feature extraction for 3-dimensional point clouds

    Science.gov (United States)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  10. Feature Extraction and Pattern Identification for Anemometer Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Longji Sun

    2012-01-01

    Full Text Available Cup anemometers are commonly used for wind speed measurement in the wind industry. Anemometer malfunctions lead to excessive errors in measurement and directly influence the wind energy development for a proposed wind farm site. This paper is focused on feature extraction and pattern identification to solve the anemometer condition diagnosis problem of the PHM 2011 Data Challenge Competition. Since the accuracy of anemometers can be severely affected by environmental factors such as icing and the tubular tower itself, in order to distinguish anemometer failures from these factors, our methodologies start with eliminating irregular data (outliers) under the influence of environmental factors. For paired data, the relation between the relative wind speed difference and the wind direction is extracted as an important feature to reflect normal or abnormal behaviors of paired anemometers. Decisions regarding the condition of paired anemometers are made by comparing the features extracted from training and test data. For shear data, a power law model is fitted using the preprocessed and normalized data, and the sum of the squared residuals (SSR) is used to measure the health of an array of anemometers. Decisions are made by comparing the SSRs of training and test data. The performance of our proposed methods was evaluated through the competition website. As a final result, our team ranked second place overall in both the student and professional categories in this competition.
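
The shear-data health measure described above can be sketched directly: fit the wind-profile power law v = a·h^α by least squares in log-log space and use the sum of squared residuals as the health score. The heights, exponent and injected fault below are invented for the demo:

```python
import numpy as np

def power_law_ssr(heights, speeds):
    """Fit v = a * h**alpha and return (alpha, sum of squared residuals).

    Taking logs linearises the model, so an ordinary polynomial fit of
    degree 1 solves it; a healthy anemometer array follows the profile
    closely, and a large SSR flags a suspect sensor.
    """
    lh, lv = np.log(heights), np.log(speeds)
    alpha, log_a = np.polyfit(lh, lv, 1)
    residuals = lv - (log_a + alpha * lh)
    return float(alpha), float(np.sum(residuals**2))

heights = np.array([10.0, 30.0, 50.0, 80.0])        # sensor heights (m)
healthy = 5.0 * (heights / 10.0) ** 0.14            # ideal shear profile
alpha_ok, ssr_ok = power_law_ssr(heights, healthy)

faulty = healthy.copy()
faulty[2] *= 0.7                                    # one anemometer under-reads
alpha_bad, ssr_bad = power_law_ssr(heights, faulty)
print(ssr_ok, ssr_bad)   # near-zero SSR vs. a clearly elevated one
```

Comparing the SSR of test data against that of training data, as the record describes, then reduces to a threshold decision on this score.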

  11. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes an extraction scheme for global motion and object trajectories in a video shot for content-based video retrieval. Motion is the key feature representing temporal information in videos, and it is more objective and consistent compared to other features such as color, texture, etc. Efficient motion feature extraction is an important step for content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are usually proposed on the basis of a known object region in the frames. In this paper, a whole picture of the motion information in the video shot is obtained by analyzing the motion of the background and foreground respectively and automatically. A 6-parameter affine model is utilized as the motion model of the background motion, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. Then the center of the object region is calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with the MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that our proposed scheme is reliable and efficient.
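
The 6-parameter affine model at the core of the global motion estimation can be fit by plain least squares from point correspondences; the record's fast, robust estimator adds outlier handling that is not reproduced in this sketch:

```python
import numpy as np

def estimate_affine(pts_src, pts_dst):
    """Least-squares estimate of the 6-parameter affine motion model.

    Each correspondence (x, y) -> (x', y') satisfies
        x' = a1*x + a2*y + a3,   y' = a4*x + a5*y + a6,
    which stacks into a linear system A p = b with p = (a1..a6).
    """
    n = pts_src.shape[0]
    A = np.zeros((2 * n, 6))
    b = pts_dst.reshape(-1)                 # [x0', y0', x1', y1', ...]
    A[0::2, 0:2], A[0::2, 2] = pts_src, 1.0  # rows for the x' equations
    A[1::2, 3:5], A[1::2, 5] = pts_src, 1.0  # rows for the y' equations
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# synthetic background motion: a small rotation plus a translation
theta = 0.05
M = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]])
dst = src @ M.T + np.array([2.0, -1.0])
p = estimate_affine(src, dst)               # recovers rotation + translation
```

Residuals of this fit are what separate background (well explained by the global model) from foreground objects after motion compensation.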

  12. A NOVEL SHAPE BASED FEATURE EXTRACTION TECHNIQUE FOR DIAGNOSIS OF LUNG DISEASES USING EVOLUTIONARY APPROACH

    Directory of Open Access Journals (Sweden)

    C. Bhuvaneswari

    2014-07-01

    Full Text Available Lung diseases are among the most common diseases affecting the human community worldwide. When such diseases are not diagnosed, they may lead to serious problems and may even lead to mortality. To assist the medical community, this study helps in detecting some of the lung diseases, specifically bronchitis and pneumonia, as distinguished from normal lung images. In this paper, to detect the lung diseases, feature extraction is done by the proposed shape-based methods, feature selection through a genetic algorithm, and the images are classified by classifiers such as MLP-NN, KNN and Bayes net, whose performances are listed and compared. The shape features are extracted and selected from the input CT images using image processing techniques and fed to the classifiers for categorization. A total of 300 lung CT images were used, out of which 240 were used for training and 60 for testing. Experimental results show that MLP-NN has a classification accuracy of 86.75%, the KNN classifier 85.2%, and Bayes net 83.4%. The sensitivity, specificity, F-measure and PPV values for the various classifiers are also computed. This concludes that MLP-NN outperforms all the other classifiers.
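
Texture features such as the GLCM statistics referenced elsewhere in this collection are straightforward to sketch; below is a gray-level co-occurrence matrix for the horizontal (0, 1) offset with two of the Haralick features such pipelines typically feed to a classifier, on invented toy textures:

```python
import numpy as np

def glcm_features(img, levels=4):
    """GLCM for the (0, 1) offset plus contrast and energy.

    img: 2-D array of integer gray levels in [0, levels). The matrix counts
    how often gray level a sits immediately left of gray level b.
    """
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                          # normalise to probabilities
    i, j = np.indices(glcm.shape)
    contrast = float(np.sum(glcm * (i - j) ** 2))
    energy = float(np.sum(glcm ** 2))
    return contrast, energy

flat = np.zeros((8, 8), dtype=int)              # uniform region
stripes = np.tile([0, 3], (8, 4))               # high-contrast texture
print(glcm_features(flat), glcm_features(stripes))
```

A uniform patch gives zero contrast and maximal energy, while the striped patch gives high contrast, which is why such features discriminate tissue textures.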

  13. Imaging systems and applications: introduction to the feature.

    Science.gov (United States)

    Imai, Francisco H; Linne von Berg, Dale C; Skauli, Torbjørn; Tominaga, Shoji; Zalevsky, Zeev

    2014-05-01

    Imaging systems have numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging system requires the integration of optics, sensing, image processing, and display rendering. This issue features original research ranging from the design of stimuli for human perception, optics applications, and image enhancement to novel imaging modalities in color and infrared spectral imaging and gigapixel imaging, as well as a systems perspective on imaging.

  14. Attributed Relational Graph Based Feature Extraction of Body Poses In Indian Classical Dance Bharathanatyam

    Directory of Open Access Journals (Sweden)

    Athira. Sugathan

    2014-05-01

    Full Text Available Articulated body pose estimation in computer vision is an important problem because of the complexity of the models. It is useful in real-time applications such as surveillance cameras, computer games, human-computer interaction, etc. Feature extraction is the main part of pose estimation, as it enables successful classification. In this paper, we propose a system for extracting features from the relational graphs of articulated upper-body poses of basic Bharatanatyam steps, each performed by different persons of different experience and size. Our method has the ability to extract features from an attributed relational graph from challenging images with background clutter, clothing diversity, illumination changes, etc. The system starts with a skeletonization process which determines the human pose and increases smoothness using a B-Spline approach. An attributed relational graph is generated, and geometrical features are extracted for the correct discrimination between shapes, which can be useful for classification and annotation of dance poses. We evaluate our approach experimentally on 2D images of basic Bharatanatyam poses.

  15. Spectral and bispectral feature-extraction neural networks for texture classification

    Science.gov (United States)

    Kameyama, Keisuke; Kosugi, Yukio

    1997-10-01

    A neural network model (Kernel Modifying Neural Network: KM Net) specialized for image texture classification, which unifies the filtering kernels for feature extraction and the layered network classifier, will be introduced. The KM Net consists of a layer of convolution kernels that are constrained to be 2D Gabor filters to guarantee efficient spectral feature localization. The KM Net enables an automated feature extraction in multi-channel texture classification through simultaneous modification of the Gabor kernel parameters (central frequency and bandwidth) and the connection weights of the subsequent classifier layers by a backpropagation-based training rule. The capability of the model and its training rule was verified via segmentation of common texture mosaic images. In comparison with the conventional multi-channel filtering method which uses numerous filters to cover the spatial frequency domain, the proposed strategy can greatly reduce the computational cost both in feature extraction and classification. Since the adaptive Gabor filtering scheme is also applicable to band selection in moment spectra of higher orders, the network model was extended for adaptive bispectral filtering for extraction of the phase relation among the frequency components. The ability of this Bispectral KM Net was demonstrated in the discrimination of visually discriminable synthetic textures with identical local power spectral distributions.
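
The constrained convolution kernels of the KM Net are 2D Gabor filters. A sketch of such a kernel and its orientation selectivity on a synthetic striped patch; the envelope is kept isotropic for brevity and the parameters are hand-picked rather than learned by backpropagation as in the record:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Complex 2-D Gabor kernel: Gaussian envelope times a complex sinusoid.

    freq is the centre spatial frequency (cycles/pixel) along orientation
    theta; sigma sets the bandwidth, i.e. the spectral localization.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

# a 9x9 patch of vertical stripes (intensity varies along x at 0.25 cyc/px)
patch = np.tile(np.sin(2 * np.pi * 0.25 * np.arange(9)), (9, 1))
match = np.abs(np.sum(gabor_kernel(9, 0.25, 0.0, 2.0) * patch))
miss = np.abs(np.sum(gabor_kernel(9, 0.25, np.pi / 2, 2.0) * patch))
print(match, miss)   # strong response at the matching orientation only
```

A bank of such kernels at several (freq, theta, sigma) settings yields the multi-channel texture features the record's classifier layers consume.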

  16. Application of Fisher Score and mRMR Techniques for Feature Selection in Compressed Medical Images

    Directory of Open Access Journals (Sweden)

    Vamsidhar Enireddy

    2015-12-01

    Full Text Available Nowadays, with the large increase in digital medical images and the variety of medical imaging equipment available for diagnosis, medical professionals are increasingly relying on computer-aided techniques both for indexing these images and for retrieving similar images from large repositories. Developing systems that are computationally less intensive, without compromising on accuracy over a high-dimensional feature space, is always challenging. In this paper an investigation is made into the retrieval of compressed medical images. Images are compressed using a visually lossless compression technique. Shape and texture features are extracted, and the best features are selected using the Fisher score and mRMR techniques. Using these selected features, an RNN with BPTT was utilized for classification of the compressed images.
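
The Fisher score used above ranks each feature by its between-class variance relative to its within-class variance; a numpy sketch on invented data (the mRMR redundancy-aware step is not shown):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class over within-class variance.

    X: (n_samples, n_features); y: integer class labels. Higher scores mean
    the feature separates the classes better; the top-k are then kept.
    """
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += Xc.shape[0] * (Xc.mean(axis=0) - mean_all) ** 2
        den += Xc.shape[0] * Xc.var(axis=0)
    return num / den

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
informative = y + 0.1 * rng.standard_normal(100)   # tracks the class label
noise = rng.standard_normal(100)                   # carries no class signal
scores = fisher_score(np.c_[informative, noise], y)
print(scores)   # the informative feature scores far higher than the noise
```

The score is univariate, which is exactly why it is usually paired with mRMR: mRMR additionally penalizes selecting features that are redundant with each other.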

  17. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

  18. Human action classification using adaptive key frame interval for feature extraction

    Science.gov (United States)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on the adaptive key frame interval (AKFI) feature extraction is presented. Since human movement periods are different, the action intervals that contain the intensive and compact motion information are considered in this work. We specify AKFI by analyzing an amount of motion through time. The key frame is defined to be the local minimum interframe motion, which is computed by using frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by adaptive motion history image and key pose history image. The action representation consists of the local orientation histogram of the features during AKFI. The experimental results on Weizmann dataset, KTH dataset, and UT Interaction dataset demonstrate that the features can effectively classify action and can classify irregular cases of walking compared to other well-known algorithms.
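
The key-frame definition above (a local minimum of inter-frame motion) can be sketched with simple frame differencing; the synthetic sliding-pattern clip below is invented:

```python
import numpy as np

def key_frames(frames):
    """Key frames as local minima of inter-frame motion (the AKFI idea).

    Motion is the mean absolute difference between consecutive frames;
    frame t is a key frame when the motion into it is a local minimum.
    """
    motion = [float(np.abs(b - a).mean()) for a, b in zip(frames[:-1], frames[1:])]
    return [t + 1 for t in range(1, len(motion) - 1)
            if motion[t] < motion[t - 1] and motion[t] <= motion[t + 1]]

# synthetic clip: a sinusoid pattern slides, pauses (the key pose), slides on
base = np.tile(np.sin(2 * np.pi * np.arange(64) / 64), (4, 1))
cumulative_shift = [0, 3, 5, 6, 6, 8, 11]
frames = [np.roll(base, s, axis=1) for s in cumulative_shift]
print(key_frames(frames))   # [4] -> the pause frame
```

The segment between successive key frames is then the adaptive interval over which the motion history and key pose history images are accumulated.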

  19. A Novel Feature Extraction for Robust EMG Pattern Recognition

    CERN Document Server

    Phinyomark, Angkoon; Phukpattaranont, Pornchai

    2009-01-01

    Various types of noise are a major problem in the recognition of electromyography (EMG) signals, so methods to remove noise are significant in EMG signal analysis. White Gaussian noise (WGN) is used to represent interference in this paper. WGN is generally difficult to remove with typical filtering, and the available solutions are limited. In addition, noise removal is an important step before performing feature extraction for EMG-based recognition. This research presents novel features that are tolerant to WGN, so that a noise removal algorithm is not needed. Two modified mean and median frequencies (MMNF and MMDF) are presented for robust feature extraction. Sixteen existing features and the two novel features are evaluated in a noisy environment: WGN with various signal-to-noise ratios (SNRs), i.e. 20-0 dB, was added to the original EMG signal. The results showed that MMNF performed very well, especially on weak EMG signals, compared with the others. The error of MMNF on weak EMG signals with...
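    For context, the conventional mean and median frequency of a spectrum can be sketched as below (a minimal version with a toy four-bin spectrum of my own; the paper's robust MMNF/MMDF variants modify which spectral quantity is fed in):

```python
def mean_frequency(freqs, power):
    """Power-weighted average frequency of the spectrum."""
    total = sum(power)
    return sum(f * p for f, p in zip(freqs, power)) / total

def median_frequency(freqs, power):
    """Frequency at which cumulative power reaches half the total."""
    half = sum(power) / 2.0
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= half:
            return f

freqs = [10, 20, 30, 40]        # Hz, toy spectral bins
power = [1.0, 4.0, 3.0, 2.0]    # toy spectral values
print(mean_frequency(freqs, power), median_frequency(freqs, power))
```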

  20. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2014-06-01

    Full Text Available User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, which is performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
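    A bare-bones genetic algorithm for feature selection in the spirit of the abstract (the fitness function, usefulness scores, and all parameters below are my toy choices, not the paper's): individuals are bit masks over the feature set, and fitness rewards informative features while penalizing mask size.

```python
import random

def fitness(mask, usefulness, penalty=0.1):
    """Reward selected informative features, penalize subset size."""
    score = sum(u for m, u in zip(mask, usefulness) if m)
    return score - penalty * sum(mask)

def evolve(usefulness, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    n = len(usefulness)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, usefulness), reverse=True)
        parents = pop[: pop_size // 2]        # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]         # one-point crossover
            i = rng.randrange(n)
            child[i] ^= 1                     # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, usefulness))

# Features 0 and 3 are genuinely informative; the rest are noise.
usefulness = [0.9, 0.05, 0.02, 0.8, 0.01, 0.03]
best = evolve(usefulness)
print(best)
```

    With this penalty, the optimal mask keeps only the two informative features; the GA typically converges to it within a few generations on a search space this small.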

  1. A HYBRID APPROACH BASED MEDICAL IMAGE RETRIEVAL SYSTEM USING FEATURE OPTIMIZED CLASSIFICATION SIMILARITY FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Yogapriya Jaganathan

    2013-01-01

    Full Text Available Over the past few years, massive improvements have been made in the field of Content Based Medical Image Retrieval (CBMIR) for the effective utilization of medical images based on visual feature analysis for diagnosis and educational research. Existing medical image retrieval systems are still not optimal at solving the feature dimensionality reduction problem, which increases computational complexity and decreases the speed of the retrieval process. The proposed CBMIR uses a hybrid approach based on feature extraction, optimization of feature vectors, classification of features and similarity measurements. This type of CBMIR is called the Feature Optimized Classification Similarity (FOCS) framework. The selected features are textures, using Gray Level Co-occurrence Matrix features (GLCM) and Tamura Features (TF), and the extracted features form a feature vector database. The Fuzzy based Particle Swarm Optimization (FPSO) technique is used to reduce the feature vector dimensionality, and classification is performed using a Fuzzy based Relevance Vector Machine (FRVM) to form groups of relevant image features that provide a natural way to classify dimensionally reduced feature vectors of images. The Euclidean Distance (ED) is used as the similarity measurement between the query image and the target images. The FOCS approach takes a query from the user and retrieves the needed images from the database. Retrieval performance is estimated in terms of precision and recall. This FOCS framework has several benefits compared to existing CBMIR: GLCM and TF are used to extract texture features and form a feature vector database; Fuzzy-PSO is used to reduce the feature vector dimensionality while selecting the important features, decreasing computational complexity; and Fuzzy based RVM is used for feature classification, in which it increases the...
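    A minimal GLCM sketch (my own toy version, not the paper's code): co-occurrence counts for horizontally adjacent gray levels, followed by two classic texture features of the kind such a feature vector would contain.

```python
def glcm(image, levels):
    """Normalized gray-level co-occurrence matrix for offset (0, 1)."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            pairs += 1
    return [[v / pairs for v in row] for row in m]

def contrast(m):
    """Weights co-occurrences by squared gray-level difference."""
    n = len(m)
    return sum(((i - j) ** 2) * m[i][j] for i in range(n) for j in range(n))

def energy(m):
    """Sum of squared matrix entries (angular second moment)."""
    return sum(v * v for row in m for v in row)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
m = glcm(img, 4)
print(round(contrast(m), 3), round(energy(m), 3))
```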

  2. Image registration algorithm using Mexican hat function-based operator and grouped feature matching strategy.

    Directory of Open Access Journals (Sweden)

    Feng Jin

    Full Text Available Feature detection and matching are crucial for robust and reliable image registration. Although many methods have been developed, they commonly focus on only one class of image features; methods that combine two or more classes of features are still novel and significant. In this work, methods for feature detection and matching are proposed. A Mexican hat function-based operator is used for image feature detection, covering both local area detection and feature point detection. For local area detection, the Mexican hat operator is used for image filtering, and the zero-crossing points are then extracted and merged into area borders. For feature point detection, the Mexican hat operator is applied in scale space to obtain the key points. After feature detection, image registration is achieved using the two classes of image features. The feature points are grouped according to a standardized region corresponding to the local area, and precise registration is eventually achieved using the grouped points. An image transformation matrix is estimated from the feature points in each region, and the best one is chosen through competition among the set of transformation matrices. This strategy has been named Grouped Sample Consensus (GCS). GCS also has the ability to remove outliers effectively. The experimental results show that the proposed algorithm has high registration accuracy and low computational cost.
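    A one-dimensional toy of the zero-crossing idea (not the paper's 2-D operator): convolve with the small kernel [-1, 2, -1], a discrete second-derivative approximation with the Mexican hat's center-surround shape, and mark sign changes in the response.

```python
def filter_signal(signal, kernel=(-1, 2, -1)):
    """Valid-mode convolution with a symmetric kernel."""
    half = len(kernel) // 2
    out = []
    for i in range(half, len(signal) - half):
        out.append(sum(k * signal[i + j - half] for j, k in enumerate(kernel)))
    return out

def zero_crossings(resp):
    """Indices where the filter response changes sign."""
    return [i for i in range(len(resp) - 1) if resp[i] * resp[i + 1] < 0]

step = [0, 0, 0, 0, 5, 5, 5, 5]      # an ideal edge
resp = filter_signal(step)
print(resp, zero_crossings(resp))
```

    The sign change between the negative and positive lobes of the response localizes the edge, which is the behaviour the area-border extraction relies on in 2-D.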

  3. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    2011-01-01

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn favorable...

  5. FEATURE DIMENSION REDUCTION FOR EFFICIENT MEDICAL IMAGE RETRIEVAL SYSTEM USING UNIFIED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Yogapriya Jaganathan

    2013-01-01

    Full Text Available The feature dimensionality reduction problem is a major issue in Content Based Medical Image Retrieval (CBMIR) for the effective management of medical images, with the support of visual features, for diagnosis and educational research. High dimensional features are the origin of substantial challenges in retrieval. The proposed CBMIR uses a unified approach based on extraction of visual features, optimized feature selection, classification of optimized features and similarity measurements. Texture features are selected using the Gray Level Co-occurrence Matrix (GLCM), Tamura Features (TF) and Gabor Filters (GF), and the extracted features form a feature vector database. Fuzzy based PSO (FPSO) is applied for feature selection to overcome the difficulty of feature vectors becoming trapped in local optima of the original PSO. This procedure also integrates the smart decision-making structure of the ACO procedure into the novel FPSO, where the global optimum position is exclusive for every feature particle. The Fuzzy based Particle Swarm Optimization and Ant Colony Optimization (FPSO-ACO) technique is used to trim down the feature vector dimensionality, and classification is accomplished using an extensive Fuzzy based Relevance Vector Machine (FRVM) to form collections of relevant image features that provide an accepted way to classify dimensionally reduced feature vectors of images. The Euclidean Distance (ED) is recognized as the best similarity measurement between the medical query image and the medical image database. The proposed approach takes a query from the user and retrieves the desired images from the database. Retrieval performance is assessed based on precision and recall. This proposed CBMIR provides comfort to the physician to obtain more assurance in their decisions for...
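    The Gabor Filter (GF) mentioned above can be sketched with the standard real-valued kernel formulation (the parameter values here are illustrative, not the paper's):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kern.append(row)
    return kern

k = gabor_kernel(size=5, wavelength=4.0, theta=0.0, sigma=2.0)
print(round(k[2][2], 3))        # the kernel peaks at its center
```

    Convolving an image with a bank of such kernels at several orientations and wavelengths yields the Gabor texture responses that feed the feature vector.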

  6. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    Science.gov (United States)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the image (character/numeral) is further divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features). Similarly, the zone centroid is computed (two more features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image. Some zones may be empty; the corresponding value in the feature vector is then zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained 97.55%, 94%, 92.5% and 95.2% recognition rates for Kannada, Telugu, Tamil and Malayalam numerals respectively.
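    The two centroid-based features per zone can be sketched as follows (my simplification to a single zone; the pixel coordinates are a toy stroke pattern):

```python
import math

def centroid(pixels):
    """Centroid of a set of (x, y) foreground pixels."""
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def zone_features(pixels, cx, cy):
    """(average distance, average angle) from (cx, cy) to the zone pixels."""
    if not pixels:                      # empty zone -> zero features
        return 0.0, 0.0
    d = [math.hypot(x - cx, y - cy) for x, y in pixels]
    a = [math.atan2(y - cy, x - cx) for x, y in pixels]
    return sum(d) / len(d), sum(a) / len(a)

strokes = [(0, 0), (0, 2), (2, 0), (2, 2)]      # toy foreground pixels
cx, cy = centroid(strokes)                      # (1.0, 1.0)
print(zone_features(strokes, cx, cy))
```

    Repeating this for every zone, plus the analogous pair measured from each zone's own centroid, gives the 4*n-dimensional feature vector described above.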

  7. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression, and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expression of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry and edge projection analysis. Experiments are carried out on the JAFFE facial expression database and give 100% accuracy on the training set and 95.26% accuracy on the test set.

  8. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    Science.gov (United States)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    Images with specific textures can be distinguished manually by eye, but this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can judge the quality of wood from the texture observed in certain parts of it. In this study, texture features were extracted from wood images so that the characteristics of wood can be identified digitally by computer. Feature extraction was carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood images. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Universite de Bourgogne, from wood samples in France that were grouped by experts into four quality classes. The resulting statistics illustrate the distribution of texture feature values for each wood type, compared according to the edge operator used and the choice of GLCM parameters.
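    The Sobel edge step that precedes the GLCM can be sketched as below (standard 3x3 kernels; the toy image is mine). The gradient magnitude image produced here is the kind of input the co-occurrence matrices are then built on.

```python
def sobel_magnitude(img):
    """Gradient magnitude with the standard Sobel kernels (borders left 0)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]      # a vertical edge down the middle
print(sobel_magnitude(img))
```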

  9. Fish Recognition Based on Robust Features Extraction from Size and Shape Measurements Using Neural Network

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2010-01-01

    Full Text Available Problem statement: Image recognition is a challenging problem that researchers have studied for a long time, especially in recent years, due to distortion, noise, segmentation errors, and the overlap and occlusion of objects in digital images. Many fields are concerned with pattern recognition, for example fingerprint verification, face recognition, iris discrimination, chromosome shape discrimination, optical character recognition, texture discrimination and speech recognition. A system for recognizing an isolated pattern of interest is one approach for dealing with such applications. Scientists and engineers with interests in image processing and pattern recognition have developed various approaches to digital image recognition problems, such as neural networks, contour matching and statistics. Approach: In this study, our aim was to recognize an isolated pattern of interest in an image based on a combination of robust extracted features. These depend on size and shape measurements, extracted by measuring distances and geometrical measurements. Results: We presented a system prototype for dealing with this problem. The system starts by acquiring an image containing a pattern of fish, then image feature extraction is performed relying on size and shape measurements. Our system was applied to 20 different fish families, each with a different number of fish types, and our sample consists of 350 distinct fish images. These images were divided into two datasets: 257 training images and 93 testing images. An overall accuracy of 86% was obtained on the test dataset using a neural network with the back-propagation algorithm. Conclusion: We developed a classifier for fish image recognition and efficiently chose a feature extraction method to fit our demands. We successfully designed and implemented a...

  10. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. Color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
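    The Canberra distance used above to compare feature vectors is simple to state; a minimal version follows (feature values are toy numbers, and 0/0 terms are taken to contribute 0, a common convention):

```python
def canberra(u, v):
    """Canberra distance: sum of |a-b| / (|a|+|b|) over paired components."""
    total = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom:
            total += abs(a - b) / denom
    return total

q = [0.2, 0.5, 0.0]     # query-image feature vector (toy values)
t = [0.1, 0.5, 0.4]     # target-image feature vector
print(canberra(q, t))
```

    Because each term is normalized by the component magnitudes, the measure is sensitive to small differences near zero, which is often why it is preferred over plain Euclidean distance for heterogeneous feature vectors.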

  11. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    Science.gov (United States)

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached except for naive Bayes.

  12. Extraction of subjective properties in image processing

    OpenAIRE

    2002-01-01

    Most present digital image processing methods are related to the objective characterization of external properties such as shape, form or colour. This information concerns objective characteristics of different bodies and is applied to extract details in order to perform several different tasks. But on some occasions, another type of information is needed: this is the case when the image processing system is to be applied to an operation related to living bodies. In this case, some other...

  13. Relationship between Hyperuricemia and Haar-Like Features on Tongue Images

    Directory of Open Access Journals (Sweden)

    Yan Cui

    2015-01-01

    Full Text Available Objective. To investigate differences in tongue images of subjects with and without hyperuricemia. Materials and Methods. This population-based case-control study was performed in 2012-2013. We collected data from 46 case subjects with hyperuricemia and 46 control subjects, including results of biochemical examinations and tongue images. Symmetrical Haar-like features based on integral images were extracted from tongue images. T-tests were performed to determine the ability of extracted features to distinguish between the case and control groups. We first selected features using the common criterion P<0.05, then conducted further examination of feature characteristics and feature selection using means and standard deviations of distributions in the case and control groups. Results. A total of 115,683 features were selected using the criterion P<0.05. The maximum area under the receiver operating characteristic curve (AUC of these features was 0.877. The sensitivity of the feature with the maximum AUC value was 0.800 and specificity was 0.826 when the Youden index was maximized. Features that performed well were concentrated in the tongue root region. Conclusions. Symmetrical Haar-like features enabled discrimination of subjects with and without hyperuricemia in our sample. The locations of these discriminative features were in agreement with the interpretation of tongue appearance in traditional Chinese and Western medicine.
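    The Haar-like features above build on the standard integral-image trick, which can be sketched as follows (a toy image and a single two-rectangle feature, not the paper's exact feature set): once the integral image is built, any rectangle sum costs four lookups, and a two-rectangle Haar-like feature is one sum minus another.

```python
def integral_image(img):
    """Summed-area table with a zero padding row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over img[top:bottom][left:right] in O(1)."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
ii = integral_image(img)
# Vertical two-rectangle Haar-like feature over the whole image:
left_half = rect_sum(ii, 0, 0, 3, 2)
right_half = rect_sum(ii, 0, 2, 3, 4)
print(left_half, right_half, right_half - left_half)
```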

  14. Feature identification for image-guided transcatheter aortic valve implantation

    Science.gov (United States)

    Lang, Pencilla; Rajchl, Martin; McLeod, A. Jonathan; Chu, Michael W.; Peters, Terry M.

    2012-02-01

    Transcatheter aortic valve implantation (TAVI) is a less invasive alternative to open-heart surgery, and is critically dependent on imaging for accurate placement of the new valve. Augmented image-guidance for TAVI can be provided by registering together intra-operative transesophageal echo (TEE) ultrasound and a model derived from pre-operative CT. Automatic contour delineation on TEE images of the aortic root is required for real-time registration. This study develops an algorithm to automatically extract contours on simultaneous cross-plane short-axis and long-axis (XPlane) TEE views, and register these features to a 3D pre-operative model. A continuous max-flow approach is used to segment the aortic root, followed by analysis of curvature to select appropriate contours for use in registration. Results demonstrate a mean contour boundary distance error of 1.3 and 2.8mm for the short and long-axis views respectively, and a mean target registration error of 5.9mm. Real-time image guidance has the potential to increase accuracy and reduce complications in TAVI.

  15. Comparative Study of Triangulation based and Feature based Image Morphing

    Directory of Open Access Journals (Sweden)

    Ms. Bhumika G. Bhatt

    2012-01-01

    Full Text Available Image morphing is one of the most powerful digital image processing techniques, used to enhance many multimedia projects, presentations, education and computer based training. It is also used in the medical imaging field to recover features not visible in images by establishing the correspondence of features among successive pairs of scanned images. This paper discusses what morphing is, together with the implementation of a triangulation based morphing technique and feature based image morphing. It analyzes both morphing techniques in terms of different attributes, such as computational complexity, the visual quality of the morph obtained, and the complexity involved in the selection of features.

  16. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D texture features and a multiclass support vector machine (MCSVM. The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns, and extracts these texture features by generating the dominant neighborhood structure (DNS map. The principal component analysis (PCA is then used for the purpose of dimensionality reduction of the high-dimensional feature vector including the extracted texture features due to the fact that the high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes the one-against-all (OAA multiclass support vector machines (MCSVMs to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
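    The signal-to-image step can be sketched as below (the min-max normalization scheme is my assumption; the DNS map, PCA and MCSVM stages are omitted): scale a 1-D vibration signal to 0..255 gray levels and wrap it row by row into a 2-D image whose repetitive patterns become texture.

```python
def signal_to_gray_image(signal, width):
    """Min-max scale to 0..255 and reshape into rows of the given width."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    gray = [int(255 * (v - lo) / span) for v in signal]
    return [gray[i:i + width] for i in range(0, len(gray) - width + 1, width)]

vibration = [0.0, 0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.5]   # toy periodic signal
print(signal_to_gray_image(vibration, 4))
```

    A periodic fault signature produces identical rows, i.e. a strongly repetitive texture, which is what the subsequent texture descriptors exploit.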

  17. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-08-09

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Entropy Analysis as an Electroencephalogram Feature Extraction Method

    Directory of Open Access Journals (Sweden)

    P. I. Sotnikov

    2014-01-01

    Full Text Available The aim of this study was to evaluate the possibility of using entropy analysis as an electroencephalogram (EEG) feature extraction method in brain-computer interfaces (BCI). The first section of the article describes the proposed algorithm, based on calculating characteristic features using Shannon entropy analysis. The second section discusses the development of a classifier for the EEG records; we use a support vector machine (SVM) as the classifier. The third section describes the test data. Further, we estimate the efficiency of the considered feature extraction method to compare it with a number of other methods, including: evaluation of signal variance; estimation of power spectral density (PSD); estimation of autoregression model parameters; signal analysis using the continuous wavelet transform; and construction of a common spatial pattern (CSP) filter. As the measure of efficiency we use the proportion of correctly recognized types of imagined movements. At the last stage we evaluate the impact of EEG signal preprocessing methods on the final classification accuracy. We conclude that entropy analysis has good prospects in BCI applications.
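    A minimal Shannon-entropy feature of the kind described above (my sketch; the binning scheme is an assumption): histogram the signal amplitudes, normalize to probabilities, and sum -p*log2(p).

```python
import math

def shannon_entropy(signal, bins=4):
    """Shannon entropy (bits) of the amplitude histogram of a signal."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0     # guard against a constant signal
    counts = [0] * bins
    for v in signal:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

flat = [1.0, 1.0, 1.0, 1.0]       # no variation -> zero entropy
spread = [0.0, 1.0, 2.0, 3.0]     # evenly spread -> maximal entropy
print(shannon_entropy(flat), shannon_entropy(spread))
```

    Low entropy indicates a concentrated amplitude distribution, high entropy a spread-out one; that scalar (per channel or per band) is the kind of feature fed to the SVM.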

  19. Data Clustering Analysis Based on Wavelet Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    QIAN Yuntao; TANG Yuanyan

    2003-01-01

    A novel wavelet-based data clustering method is presented in this paper, which includes wavelet feature extraction and a cluster growing algorithm. The wavelet transform can provide rich and diversified information for representing the global and local inherent structures of a dataset; therefore, it is a very powerful tool for clustering feature extraction. As clustering is an unsupervised classification, the target of clustering analysis depends on the specific clustering criteria. Several criteria that should be considered for a general-purpose clustering algorithm are proposed, and the cluster growing algorithm is constructed to connect the clustering criteria with the wavelet features. Compared with other popular clustering methods, our clustering approach provides multi-resolution clustering results, needs few prior parameters, correctly deals with irregularly shaped clusters, and is insensitive to noise and outliers. As this wavelet-based clustering method is aimed at solving two-dimensional data clustering problems, for high-dimensional datasets the self-organizing map and U-matrix method are applied to transform them into two-dimensional Euclidean space, enabling high-dimensional data clustering analysis. Results on some simulated data and standard test data are reported to illustrate the power of our method.

  20. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  1. Improving Identification of Area Targets by Integrated Analysis of Hyperspectral Data and Extracted Texture Features

    Science.gov (United States)

    2012-09-01

    ...Gray Level Co-occurrence Matrix (GLCM). From this GLCM the quantities known as texture features are extracted. The textures studied in his landmark paper were: angular second... ...defines the number of surrounding pixels that are used to create the GLCM. A 3x3 window would only include the 8 pixels immediately adjacent to the...

  2. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it implies how the pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  3. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily the lack of awareness and proper health care. As they say, prevention is better than cure, so a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help to save their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set, to see which better suits the automated system in terms of a higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images, and the extracted data is subjected to data mining techniques to determine which feature is best for which stage of cancer. The results clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  4. A Novel Feature Cloud Visualization for Depiction of Product Features Extracted from Customer Reviews

    Directory of Open Access Journals (Sweden)

    Tanvir Ahmad

    2013-09-01

    Full Text Available There has been exponential growth of content on the World Wide Web, with online users contributing the majority of its unstructured data, which contains a good amount of information on many different subjects, ranging from products and news to programmes and services. Other users read these reviews and try to extract the meaning of the sentences written by the reviewers. Because the number and length of the reviews are so large, most of the time a user will read only a few reviews yet would still like to make an informed decision about the subject under discussion. Websites have adopted many different summary methods, such as numerical rating, star rating and percentage rating. However, these methods fail to convey information about the explicit features of the product and their overall weight when the product is taken in its totality. In this paper, a framework is presented that first calculates the weight of each feature from the satisfaction or dissatisfaction users express about it, and a feature cloud visualization is then proposed that uses two levels of specificity: the first level lists the extracted features and the second level shows the opinions on those features. A font generation function is applied that calculates font size according to the importance of each feature alongside the opinions expressed on it.
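    A font generation function of the kind described can be as simple as a linear map from a feature's weight to a font size. The weights and size range below are made up; the paper's actual function is not reproduced.

```python
# Hypothetical font-generation function for a two-level feature cloud:
# font size grows linearly with a feature's weight (e.g. its opinion count).
def font_size(weight, w_min, w_max, size_min=10, size_max=48):
    if w_max == w_min:                 # all features equally weighted
        return (size_min + size_max) // 2
    scale = (weight - w_min) / (w_max - w_min)
    return round(size_min + scale * (size_max - size_min))

weights = {"battery": 120, "screen": 45, "price": 80}   # invented opinion counts
lo, hi = min(weights.values()), max(weights.values())
sizes = {f: font_size(w, lo, hi) for f, w in weights.items()}
print(sizes)   # {'battery': 48, 'screen': 10, 'price': 28}
```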

  5. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. On a public breast cancer biomedical dataset, the 10-fold cross-validation (10 CV) classification accuracy exceeded 96%, superior to that obtained with the original features and with a traditional feature extraction method.
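    A minimal differential-evolution feature selector looks roughly like the following sketch. The fitness function, parameters, and binarisation threshold are illustrative assumptions; the paper's improved DE variant and its LDA-based evaluation are not reproduced.

```python
import numpy as np

def fitness(mask, X, y):
    """Toy fitness: class-separation of the selected columns, minus a size penalty."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    d = Xs[y == 0].mean(axis=0) - Xs[y == 1].mean(axis=0)
    return np.linalg.norm(d) - 0.1 * mask.sum()

def de_select(X, y, pop=20, gens=30, F=0.5, CR=0.9, seed=0):
    """Simplified DE/rand/1 over real genomes; threshold at 0.5 gives a feature mask."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.random((pop, n))
    fit = np.array([fitness(p > 0.5, X, y) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(n) < CR, a + F * (b - c), P[i])
            f = fitness(trial > 0.5, X, y)
            if f > fit[i]:                     # greedy selection step
                P[i], fit[i] = trial, f
    return P[fit.argmax()] > 0.5               # best binary feature mask

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 8))
X[:, 0] += 2 * y                               # only feature 0 is informative
mask = de_select(X, y)
print(mask)
```

    With the size penalty, the search is pushed towards small masks that retain the informative column, which is the usual goal of wrapper-style feature selection.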

  6. Features extraction from the electrocatalytic gas sensor responses

    Science.gov (United States)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One type of gas sensor used for the detection and identification of toxic air pollutants is the electrocatalytic gas sensor. Electrocatalytic sensors, operated in cyclic voltammetry mode, enable the detection of various gases. Their responses take the form of I-V curves, which contain information about the type and concentration of the measured volatile compound; however, additional analysis is required for efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for this application, but further work on improving the processing of the sensor responses is required. In this article a method for extracting parameters from the electrocatalytic sensor responses is presented. The extracted features enable a significant reduction of data dimensionality without loss of efficiency in recognizing four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide and sulfur dioxide.
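    Parameter extraction from an I-V curve of this kind might, for example, record peak currents, the voltages at which they occur, and the integrated area of the sweep. The specific features and the synthetic two-peak curve below are assumptions for illustration; the article's actual parameter set may differ.

```python
import numpy as np

def iv_features(v, i):
    """Reduce an I-V curve to a handful of scalar features."""
    area = float(((i[1:] + i[:-1]) * np.diff(v)).sum() / 2)  # trapezoidal integral of I dV
    return {"i_max": float(i.max()), "i_min": float(i.min()),
            "v_at_imax": float(v[i.argmax()]), "v_at_imin": float(v[i.argmin()]),
            "area": area}

# Synthetic curve: an anodic peak near +0.3 V and a cathodic peak near -0.4 V.
v = np.linspace(-1.0, 1.0, 201)
i = 5.0 * np.exp(-((v - 0.3) ** 2) / 0.02) - 4.0 * np.exp(-((v + 0.4) ** 2) / 0.02)
feats = iv_features(v, i)
print(feats["v_at_imax"], feats["v_at_imin"])
```

    Five scalars per sweep instead of 201 raw samples is exactly the kind of dimensionality reduction the abstract describes.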

  7. An Advanced Approach to Extraction of Colour Texture Features Based on GLCM

    Directory of Open Access Journals (Sweden)

    Miroslav Benco

    2014-07-01

    Full Text Available This paper discusses research in the area of texture image classification; more specifically, the combination of texture and colour features. The principal objective is to create a robust descriptor for the extraction of colour texture features. The principles of two well-known methods for grey-level texture feature extraction, namely the GLCM (grey-level co-occurrence matrix) and Gabor filters, are used in the experiments. For texture classification, a support vector machine is used. In the first approach, the methods are applied separately to each channel of the colour image, and the experimental results show a large gain in precision for colour texture retrieval by the GLCM. The GLCM is therefore modified to extract probability matrices directly from the colour image. A method for a 13-direction neighbourhood system is proposed and formulas for computing the probability matrices are presented. The proposed method, called CLCM (colour-level co-occurrence matrices), proves in experiments to be a powerful method for colour texture classification.
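    The GLCM building block is simple to state in code: count how often grey-level pairs co-occur at a fixed offset, normalise to probabilities, then derive scalar texture features. The sketch below covers a single offset and two classic features; it is not the paper's 13-direction CLCM.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one (dx, dy) offset, normalised."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            g[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return g / g.sum()

def glcm_features(p):
    """Two common Haralick-style features from a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {"contrast": ((i - j) ** 2 * p).sum(),
            "energy": (p ** 2).sum()}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(glcm_features(p))
```

    The colour extension discussed in the paper replaces the single grey channel with co-occurrences computed across colour channels.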

  8. Dermoscopic diagnosis of melanoma in a 4D space constructed by active contour extracted features.

    Science.gov (United States)

    Mete, Mutlu; Sirakov, Nikolay Metodiev

    2012-10-01

    Dermoscopy, also known as epiluminescence microscopy, is a major imaging technique used in the assessment of melanoma and other diseases of the skin. In this study we propose a computer-aided method and tools for fast, automated diagnosis of malignant skin lesions using non-linear classifiers. The method consists of three main stages: (1) extraction of skin lesion features from images; (2) feature measurement and digitization; and (3) binary diagnosis (classification) of the skin lesion using the extracted features. A shrinking active contour (S-ACES) extracts the boundaries of colour regions, the number of colours, and the lesion's boundary, which is used to calculate boundary abruptness. Quantification methods for measuring asymmetry and abrupt endings in skin lesions are elaborated for the second stage of the method. The total dermoscopy score (TDS) formula of the ABCD rule is modeled as a linear support vector machine (SVM), and a polynomial SVM classifier is further developed. To validate the proposed framework, a dataset of 64 lesion images with a ground truth was selected. The lesions were classified as benign or malignant by the TDS-based model and by the polynomial SVM classifier. Comparing the results, we show that the latter model has a better f-measure than the TDS-based (linear) model in classifying skin lesions into the two groups, malignant and benign. Copyright © 2012 Elsevier Ltd. All rights reserved.
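    The TDS the paper models as a linear classifier is, in the standard ABCD rule, a weighted sum of the asymmetry, border, colour, and differential-structure scores. The weights and the ~5.45 malignancy threshold below follow the commonly cited form of the rule, not a detail taken from this paper.

```python
# Total dermoscopy score of the ABCD rule (standard published weights).
def tds(asymmetry, border, colors, structures):
    """asymmetry 0-2, border 0-8, colors 1-6, differential structures 1-5."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

score = tds(asymmetry=2, border=4, colors=5, structures=4)
print(score)   # ~7.5, above the ~5.45 threshold commonly read as malignant
```

    Because the score is a fixed linear combination with a threshold, it maps naturally onto a linear SVM, which is the modelling step the abstract describes before moving to the polynomial classifier.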

  9. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites make transactions convenient, and consumers can also post reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed that lets the reader grasp the point of them as a whole. The idea arises from review summarization, which summarizes overall opinion based on the sentiment and features the reviews contain. In this study, the domain of focus is the digital camera. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; and 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification, and feature extraction
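    The sentiment-classification step (step 3) can be sketched with a tiny multinomial Naïve Bayes classifier with Laplace smoothing. The training reviews below are invented for illustration; the study's actual corpus and dependency-based feature extraction are not reproduced.

```python
from collections import Counter
import math

# Toy labelled reviews about camera-like features (invented).
train = [("battery lasts long great", "pos"),
         ("love the sharp screen", "pos"),
         ("battery dies fast terrible", "neg"),
         ("screen too dim poor", "neg")]

counts = {"pos": Counter(), "neg": Counter()}   # word counts per class
docs = Counter()                                # document counts per class
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def classify(text):
    def logp(label):
        total = sum(counts[label].values())
        lp = math.log(docs[label] / sum(docs.values()))       # class prior
        for w in text.split():
            # Laplace-smoothed word likelihood
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        return lp
    return max(("pos", "neg"), key=logp)

print(classify("battery lasts great"))   # pos
```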