WorldWideScience

Sample records for image feature extraction

  1. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
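
    The sketch below illustrates the general idea described in this record, segmenting closed contours in an edge/gradient image with a watershed, using scikit-image. Function name, parameters, and the size filter are illustrative assumptions, not the patented implementation.

    ```python
    # A minimal sketch, assuming a grayscale planetary image: run a watershed on an
    # edge image so that closed contours in the gradient become labeled regions
    # (candidate small rocks). Library choices and thresholds are illustrative.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import feature, segmentation, measure

    def extract_small_rocks(image, sigma=2.0, min_area=5, max_area=500):
        """Segment small closed-contour regions from a grayscale image (float array)."""
        # Binary edge map; it acts as the "ridge" that separates flooded basins.
        edges = feature.canny(image, sigma=sigma)

        # Markers: connected components of the non-edge area.
        markers, _ = ndi.label(~edges)

        # Watershed floods each marker up to the edges, so closed contours
        # in the gradient become separate labeled regions.
        labels = segmentation.watershed(edges.astype(float), markers)

        # Keep only regions whose area is plausible for "small rocks".
        rocks = [r for r in measure.regionprops(labels)
                 if min_area <= r.area <= max_area]
        return labels, rocks
    ```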

  2. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on the preselected face-candidate regions. For eye and mouth localization, color information and the local contrast around the eyes are used. The ellipse of the face boundary is determined using the gradient image and the Hough transform. The algorithm was tested on the FERET image database.
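
    As a rough illustration of the skin-color preselection step described above, the sketch below thresholds chrominance channels to find face-candidate regions. The YCrCb bounds and area threshold are common rule-of-thumb values assumed for the example, not the paper's.

    ```python
    # Illustrative sketch of skin-color based face-candidate localization.
    # The chrominance bounds below are assumptions, not the published values.
    import cv2
    import numpy as np

    def face_candidates(bgr_image, min_area=2000):
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        # Loose skin-tone bounds on the Cr/Cb chrominance channels.
        skin = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
        skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
        contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Bounding boxes of sufficiently large skin blobs are face candidates;
        # eye/mouth localization would then run only inside these boxes.
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
    ```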

  3. Automatic Feature Extraction from Planetary Images

    Science.gov (United States)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  4. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filtering …

  5. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  6. Local distortion resistant image watermarking relying on salient feature extraction

    Science.gov (United States)

    Nikolaidis, Athanasios

    2012-12-01

    The purpose of this article is to present a novel method for region based image watermarking that can tolerate local image distortions to a substantially greater extent than existing methods. The first stage of the method relies on computing a normalized version of the original image using image moments. The next step is to extract a set of feature points that will act as centers of the watermark embedding areas. Four different existing feature extraction techniques are tested: Radial Symmetry Transform (RST), scale-invariant feature transform (SIFT), speeded up robust features (SURF) and features from accelerated segment test (FAST). Instead of embedding the watermark in the DCT domain of the normalized image, we follow the equivalent procedure of first performing the inverse DCT of the original watermark, inversely normalizing it and finally embedding it in the original image. This is done in order to minimize image distortion imposed by inversely normalizing the normalized image to obtain the original. The detection process consists of normalizing the input image and extracting the feature points of the normalized image, after which a correlation detector is employed to detect the possibly inserted watermark in the normalized image. Experimental results demonstrate the relative performance of the four different feature extraction techniques under both geometrical and signal processing operations, as well as the overall superiority of the method against two state-of-the-art techniques that are quite robust as far as local image distortions are concerned.
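
    The sketch below covers only the feature-point stage of the scheme described above: detecting salient points with FAST (one of the four detectors compared) and keeping a few strong, well-separated points as centers of candidate embedding regions. The normalization, DCT-domain embedding and correlation detection are not shown; thresholds and counts are assumed for illustration.

    ```python
    # Hedged sketch of the feature-point extraction stage only (FAST detector).
    import cv2

    def embedding_centers(gray_image, max_points=16, min_dist=40):
        fast = cv2.FastFeatureDetector_create(threshold=30)
        keypoints = sorted(fast.detect(gray_image, None),
                           key=lambda k: k.response, reverse=True)
        centers = []
        for kp in keypoints:
            # Keep only points far enough from already accepted centers,
            # so embedding regions do not overlap.
            if all((kp.pt[0] - c[0]) ** 2 + (kp.pt[1] - c[1]) ** 2 >= min_dist ** 2
                   for c in centers):
                centers.append(kp.pt)
            if len(centers) == max_points:
                break
        return centers
    ```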

  7. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d'Optica Aplicada i Processament d'Imatge, Departament d'Optica i Optometria, Universitat Politecnica de Catalunya (Spain)]

    2011-01-01

    Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique for the compensation of non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.

  8. Image feature extraction using Gabor-like transform

    Science.gov (United States)

    Finegan, Michael K., Jr.; Wee, William G.

    1991-01-01

    Noisy and highly textured images were operated on with a Gabor-like transform. The results were evaluated to see if useful features could be extracted using spatio-temporal operators. The use of spatio-temporal operators allows for extraction of features containing simultaneous frequency and orientation information. This method allows important features, both specific and generic, to be extracted from images. The transformation was applied to industrial inspection imagery, in particular, a NASA space shuttle main engine (SSME) system for offline health monitoring. Preliminary results are given and discussed. Edge features were extracted from one of the test images. Because of the highly textured surface (even after scan line smoothing and median filtering), the Laplacian edge operator yields many spurious edges.

  9. THE IDENTIFICATION OF PILL USING FEATURE EXTRACTION IN IMAGE MINING

    Directory of Open Access Journals (Sweden)

    A. Hema

    2015-02-01

    Full Text Available With the help of image mining techniques, an automatic pill identification system was investigated in this study for matching images of pills based on several features such as imprint, color, size and shape. Image mining is an inter-disciplinary task requiring expertise from various fields such as computer vision, image retrieval, image matching and pattern recognition. Image mining is the method in which unusual patterns are detected so that only hidden and useful image data are stored in a large database, and it involves two different approaches for image matching. This research presents drug identification, registration, detection and matching, with text, color and shape extraction of the image under the image mining concept, to identify legal and illegal pills with greater accuracy. Initially, preprocessing is carried out using a novel interpolation algorithm whose main aim is to reduce the artifacts, blurring and jagged edges introduced during up-sampling. The registration process then comprises two modules: feature extraction and corner detection. In feature extraction, the noisy high-frequency edges are discarded and the relevant high-frequency edges are selected; the corner detection approach detects the high-frequency pixels at intersection points, through which the overall performance is improved. The dataset must be segregated into groups based on the query image's size, shape, color, text, etc.; this process of segregating the required information is called feature extraction, and is done using a geometrical gradient feature transformation. Finally, color and shape feature extraction were performed using a color histogram and a geometrical gradient vector. Simulation results show that the proposed techniques provide accurate retrieval results in terms of both time and accuracy when compared to conventional approaches.
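
    A minimal sketch of the color and shape feature step follows: an HSV color histogram plus a few simple geometric measures for a segmented pill. The bin counts, the particular shape measures, and the function name are assumptions for illustration, not the study's exact descriptors.

    ```python
    # Hedged sketch: color histogram + simple shape features for a segmented pill.
    import cv2
    import numpy as np

    def pill_features(bgr_image, mask):
        """mask: uint8 binary mask of the segmented pill."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], mask, [16, 8], [0, 180, 0, 256])
        hist = cv2.normalize(hist, None).flatten()

        contour = max(cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)[0], key=cv2.contourArea)
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        x, y, w, h = cv2.boundingRect(contour)
        aspect_ratio = w / float(h)
        # Combined color + shape feature vector used for matching against a database.
        return np.concatenate([hist, [area, circularity, aspect_ratio]])
    ```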

  10. Discriminative data transform for image feature extraction and classification.

    Science.gov (United States)

    Song, Yang; Cai, Weidong; Huh, Seungil; Chen, Mei; Kanade, Takeo; Zhou, Yun; Feng, Dagan

    2013-01-01

    Good feature design is important to achieve effective image classification. This paper presents a novel feature design with two main contributions. First, prior to computing the feature descriptors, we propose to transform the images with learning-based filters to obtain more representative feature descriptors. Second, we propose to transform the computed descriptors with another set of learning-based filters to further improve the classification accuracy. In this way, while generic feature descriptors are used, data-adaptive information is integrated into the feature extraction process based on the optimization objective to enhance the discriminative power of feature descriptors. The feature design is applicable to different application domains, and is evaluated on both lung tissue classification in high-resolution computed tomography (HRCT) images and apoptosis detection in time-lapse phase contrast microscopy image sequences. Both experiments show promising performance improvements over the state-of-the-art.

  11. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  12. Automated feature extraction and classification from image sources

    Science.gov (United States)


    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  13. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  14. Feature extraction of hyperspectral images using wavelet and matching pursuit

    Science.gov (United States)

    Hsu, Pai-Hui

    Since hyperspectral images contain rich and fine spectral information, an improvement in land use/cover classification accuracy is highly expected from the utilization of such images. However, the traditional statistics-based classification methods which have been successfully applied to multispectral data in the past are not as effective for hyperspectral data. One major reason is that the number of spectral bands is too large relative to the number of training samples. This problem is caused by the curse of dimensionality, which refers to the fact that the sample size required for training a specific classifier grows exponentially with the number of spectral bands. A simple but sometimes very effective way to overcome this problem is to reduce the dimensionality of hyperspectral images. This can be done by feature extraction, in which a small number of salient features are extracted from the hyperspectral data when confronted with a limited number of training samples. In this paper, a new feature extraction method based on matching pursuit (MP) is proposed to extract useful features for the classification of hyperspectral images. The matching pursuit algorithm uses a greedy strategy to iteratively find an adaptive and optimal representation of the hyperspectral data from a highly redundant wavelet packet dictionary. An AVIRIS data set was tested to illustrate the classification performance after the matching pursuit method was applied. In addition, some existing feature extraction methods based on the wavelet transform are compared with the matching pursuit method in terms of classification accuracy. The experimental results showed that the wavelet and matching pursuit methods indeed provide an effective tool for feature extraction, and that the classification problem caused by the curse of dimensionality can be alleviated by matching pursuit and wavelet-based dimensionality reduction.
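
    The following is a minimal greedy matching-pursuit sketch in the spirit of this record: each pixel's spectrum is represented by a few atoms chosen greedily from a redundant dictionary, and the selected coefficients act as low-dimensional features. The wavelet-packet dictionary construction used in the paper is abstracted into the `dictionary` argument, which is an assumed input here.

    ```python
    # Hedged sketch of generic matching pursuit over a given redundant dictionary.
    import numpy as np

    def matching_pursuit_features(spectrum, dictionary, n_atoms=10):
        """dictionary: (n_bands, n_total_atoms) array with unit-norm columns."""
        residual = spectrum.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            k = int(np.argmax(np.abs(correlations)))        # best-matching atom
            coeffs[k] += correlations[k]                     # accumulate its coefficient
            residual -= correlations[k] * dictionary[:, k]   # remove its contribution
        return coeffs  # sparse feature vector (most entries zero)
    ```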

  15. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds via a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which marks the noisy bits in an iris code in order to exclude them in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition rate.

  17. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    International Nuclear Information System (INIS)

    Quanqing, Zhu.; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-01-01

    In this paper, we present a method to realize feature extraction on low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, Lee-filtering method is adopted to realize pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally the common linking method is adopted and the characteristic parameters of magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over other traditional ones for feature extraction of low contrast images

  18. 3D space positioning and image feature extraction for workpiece

    Science.gov (United States)

    Ye, Bing; Hu, Yi

    2008-03-01

    An optical system for measuring the 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. The CCD image sensor monitoring the target is used to lock onto the measured workpiece when it enters the field of view. The other sensors, which are placed symmetrically with the beam scanners, measure the appearance of the workpiece and its characteristic parameters. The paper establishes a target image segmentation and image feature extraction algorithm to lock onto the target; based on the geometric similarity of object characteristics, rapid locking onto the goal can be realized. When the line laser beam scans the tested workpiece, a number of images are extracted at equal time intervals and the overlapping images are processed to complete image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.

  19. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

    Full Text Available Feature extraction and matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented and rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs at its maximum or highest efficiency. The efficiency is measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Current comparative studies assess the performance of the algorithms based on the results obtained under different criteria such as speed, sensitivity, occlusion, and others. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups: feature-based and texture-based. From this broad classification, only the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast-SIFT) and SURF (speeded up robust features). The performance of the Fast-SIFT (F-SIFT) feature detection method is compared for scale changes, rotation, blur, illumination changes and affine transformations. All the experiments use the repeatability measurement and the number of correct matches as evaluation measures. SIFT is stable in most situations although it is slow. F-SIFT is the fastest, with good performance similar to SURF, while SIFT and PCA-SIFT show advantages under rotation and illumination changes.
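
    The sketch below shows the kind of pairwise evaluation such studies run: detect and match features between two images, then count "correct" matches against a known ground-truth homography. ORB is used here only because it ships with OpenCV; swapping in SIFT, SURF or another detector changes only the detector construction. The tolerance and feature count are assumed values.

    ```python
    # Hedged sketch of repeatability / correct-match counting for one image pair.
    import cv2
    import numpy as np

    def count_correct_matches(img1, img2, H_true, tol=3.0):
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        correct = 0
        for m in matches:
            p1 = np.array([*kp1[m.queryIdx].pt, 1.0])
            p2 = np.array(kp2[m.trainIdx].pt)
            proj = H_true @ p1
            proj = proj[:2] / proj[2]          # project point from image 1 into image 2
            if np.linalg.norm(proj - p2) <= tol:
                correct += 1                    # counts toward the repeatability score
        return correct, len(matches)
    ```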

  20. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.

  1. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.

  2. Automatic archaeological feature extraction from satellite VHR images

    Science.gov (United States)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic data, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of the given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the image structures being searched for. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were

  3. Line drawing extraction from gray level images by feature integration

    Science.gov (United States)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  4. Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques

    National Research Council Canada - National Science Library

    Ling, Hao

    2001-01-01

    This report summarizes the scientific progress on the research grant "Radar image Enhancement, Feature Extraction, and Motion Compensation Using Joint Time-Frequency Techniques" during the period 15...

  5. Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques

    National Research Council Canada - National Science Library

    Ling, Hao

    1999-01-01

    This Report summarizes the scientific progress on the research grant "Radar Image Enhancement, Feature Extraction, and Motion Compensation Using Joint Time-Frequency Techniques" during the period 15...

  6. Spatial and spatio-temporal feature extraction from 4D echocardiography images.

    Science.gov (United States)

    Awan, Ruqayya; Rajpoot, Kashif

    2015-09-01

    Ultrasound images are difficult to segment because of their noisy and low contrast nature which makes it challenging to extract the important features. Typical intensity-gradient based approaches are not suitable for these low contrast images while it has been shown that the local phase based technique provides better results than intensity based methods for ultrasound images. The spatial feature extraction methods ignore the continuity in the heart cycle and may also capture spurious features. It is believed that the spurious features (noise) that are not consistent along the frames can be excluded by considering the temporal information. In this paper, we present a local phase based 4D (3D+time) feature asymmetry (FA) measure using the monogenic signal. We have investigated the spatio-temporal feature extraction to explore the effect of adding time information in the feature extraction process. To evaluate the impact of time dimension, the results of 4D based feature extraction are compared with the results of 3D based feature extraction which shows the favorable 4D feature extraction results when temporal resolution is good. The paper compares the band-pass filters (difference of Gaussian, Cauchy and Gaussian derivative) in terms of their feature extraction performance. Moreover, the feature extraction is further evaluated quantitatively by left ventricle segmentation using the extracted features. The results demonstrate that the spatio-temporal feature extraction is promising in frames with good temporal resolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Enhancement and feature extraction of RS images from seismic area and seismic disaster recognition technologies

    Science.gov (United States)

    Zhang, Jingfa; Qin, Qiming

    2003-09-01

    Many types of feature extraction for RS images are analyzed, and a work procedure for pattern recognition in RS images of seismic disasters is proposed. The aerial RS image of the Tangshan Great Earthquake is processed, and the digital features of various typical seismic disasters in the RS image are calculated.

  8. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Yin Fei

    2017-01-01

    Full Text Available As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.

  9. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    Medical field has seen a phenomenal improvement over the previous years. The invention of computers with appropriate increase in the processing and internet speed has changed the face of the medical technology. However there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of them fail to address how to take the detection forward to a stage where it will be beneficial to the society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of the day. Such a system is explored in this paper by summarizing a number of techniques for fundus image features extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. The knowledge of the same would bring about worthy changes in the domain of exudates extraction of the eye. This is essential in cases where the patients may not have access to the best of technologies. This paper attempts at a comprehensive summary of the techniques for Content Based Image Retrieval (CBIR) or fundus features image extraction, and few choice methods of both, and an exploration which aims to find ways to combine these two attractive features, and combine them so that it is beneficial to all.

  10. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. Using unlabeled samples, which are often available in almost unlimited quantity, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods, and also proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  11. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    Science.gov (United States)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

    In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.
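
    The sketch below shows only the per-tile work such a tool would distribute: the two corner detectors named above (Harris and Shi-Tomasi) applied to one image tile with OpenCV. The Hadoop/HIPI distribution layer is not shown, and the detector parameters are assumed defaults.

    ```python
    # Hedged sketch of per-tile Harris and Shi-Tomasi corner detection.
    import cv2
    import numpy as np

    def detect_corners(gray_tile, max_corners=500):
        """gray_tile: single-channel 8-bit image tile."""
        gray = np.float32(gray_tile)
        harris_response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
        harris_points = np.argwhere(harris_response > 0.01 * harris_response.max())

        shi_tomasi = cv2.goodFeaturesToTrack(gray_tile, maxCorners=max_corners,
                                             qualityLevel=0.01, minDistance=10)
        return harris_points, shi_tomasi
    ```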

  12. DIFET: DISTRIBUTED FEATURE EXTRACTION TOOL FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    S. Eken

    2017-11-01

    Full Text Available In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.

  13. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    Science.gov (United States)

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  14. A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available A number of techniques have been proposed earlier for feature extraction using image binarization. Efficiency of the techniques was dependent on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization has been proposed. The technique has binarized the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and has been compared with existing widely used techniques using binarization for extraction of features. It has been inferred that the proposed method has outclassed all the existing techniques and has shown consistent classification performance.
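
    A minimal sketch of the bit-plane idea follows: slice the most significant bit planes of a grayscale image and summarize each binarized plane with simple statistics. The plane selection and the statistics are assumptions for illustration; the paper's local-threshold refinement is not reproduced.

    ```python
    # Hedged sketch of bit-plane binarization features.
    import numpy as np

    def bit_plane_features(gray_image, planes=(7, 6, 5, 4)):
        """gray_image: uint8 array; planes: most significant bit planes first."""
        features = []
        for p in planes:
            plane = (gray_image >> p) & 1                      # binary bit plane
            features.append(plane.mean())                      # fraction of "on" pixels
            features.append((plane[:, 1:] != plane[:, :-1]).mean())  # horizontal transitions
        return np.array(features)
    ```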

  15. Uniform competency-based local feature extraction for remote sensing images

    Science.gov (United States)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.

  16. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in images. Traditional text-based image retrieval systems are not adequate, since they are time consuming and require manual image annotation; moreover, the image annotation differs between people. An alternative is a Content Based Image Retrieval (CBIR) system, which retrieves/searches for images using their contents rather than text, keywords etc. A lot of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to use to define an object in an image compared to other features such as color, texture etc. Over and above this, if applied alone, no descriptor will give fruitful results; by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. So, an attempt will be made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  17. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    Directory of Open Access Journals (Sweden)

    Mei Yu

    2012-01-01

    Full Text Available The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training as well as for feature extraction. A feature-extraction algorithm is developed based on different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm mainly includes two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using bag of visual words (BoW) based on these regions. The evaluation of this system based on the proposed feature extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scans. The results of the proposed feature extraction algorithm show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%.
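
    A minimal sketch of process (1) follows: a distance transform inside the lesion mask splits the lesion into concentric regions (rim to core) over which region-wise bag-of-visual-words histograms could then be computed. The number of regions and the function name are illustrative assumptions.

    ```python
    # Hedged sketch of lesion partitioning via a distance transform.
    import numpy as np
    from scipy import ndimage as ndi

    def partition_lesion(lesion_mask, n_regions=3):
        """lesion_mask: binary array; returns integer region labels (0 = background)."""
        dist = ndi.distance_transform_edt(lesion_mask)   # depth inside the lesion
        if dist.max() == 0:
            return np.zeros_like(lesion_mask, dtype=int)
        # Bin normalized depth into rings: 1 = outer rim ... n_regions = core.
        normalized = dist / dist.max()
        regions = np.ceil(normalized * n_regions).astype(int)
        return regions * (lesion_mask > 0)
    ```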

  18. Iris image enhancement for feature recognition and extraction

    CSIR Research Space (South Africa)

    Mabuza, GP

    2012-10-01

    Full Text Available Gonzalez, R.C. and Woods, R.E. 2002. Digital Image Processing, 2nd Edition, Instructor's manual. Englewood Cliffs, Prentice Hall, pp 17-36. Proença, H. and Alexandre, L.A. 2007. Toward Noncooperative Iris Recognition: A classification approach using... multiple signatures. IEEE Transactions on Pattern Analysis and Machine Intelligence. IEEE Computer Society, 29 (4): 607-611. Sazonova, N. and Schuckers, S. 2011. Fast and efficient iris image enhancement using logarithmic image processing. Biometric...

  19. Global image feature extraction using slope pattern spectra

    CSIR Research Space (South Africa)

    Toudjeu, IT

    2008-06-01

    Full Text Available French South African Technical Institute in Electronics, Tshwane University of Technology, Pretoria, South Africa; Remote Sensing Research Group, Meraka Institute, CSIR, Pretoria, South Africa. Steel image regression: images of high strength low alloy steel samples were used in this application. The steel samples were prepared as part of a study by the Department of Chemical and Metallurgical Engineering at the Tshwane University of Technology...

  20. Image Analysis of Soil Micromorphology: Feature Extraction, Segmentation, and Quality Inference

    Directory of Open Access Journals (Sweden)

    Petros Maragos

    2004-06-01

    Full Text Available We present an automated system that we have developed for estimation of the bioecological quality of soils using various image analysis methodologies. Its goal is to analyze soil-section images, extract features related to their micromorphology, and relate the visual features to various degrees of soil fertility inferred from biochemical characteristics of the soil. The image methodologies used range from low-level image processing tasks, such as nonlinear enhancement, multiscale analysis, geometric feature detection, and size distributions, to object-oriented analysis, such as segmentation, region texture, and shape analysis.

  1. Feature Extraction

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  2. Cartographic feature extraction with integrated SIR-B and Landsat TM images

    Science.gov (United States)

    Welch, R.; Ehlers, Manfred

    1988-01-01

    A digital cartographic multisensor image database of excellent geometry and improved resolution was created by registering SIR-B images to a rectified Landsat TM reference image and applying intensity-hue-saturation enhancement techniques. When evaluated against geodetic control, RMSE(XY) values of approximately ±20 m were noted for the composite SIR-B/TM images. The completeness of cartographic features extracted from the composite images exceeded those obtained from separate SIR-B and TM image data sets by approximately 10 and 25 percent, respectively, indicating that the composite images may prove suitable for planimetric mapping at a scale of 1:100,000 or smaller. At present, the most effective method for extracting cartographic information involves digitizing features directly from the image processing display screen.
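
    The sketch below illustrates an intensity-hue-saturation style merge of the kind mentioned above: convert a co-registered TM color composite to an intensity/hue/saturation-like space, swap in the higher-resolution SIR-B radar band as the intensity channel, and convert back. HSV is used here only as a readily available stand-in for a true IHS transform, and co-registration of the inputs is assumed.

    ```python
    # Hedged sketch of an IHS-like radar/optical merge (HSV as a stand-in for IHS).
    import cv2
    import numpy as np

    def ihs_like_fusion(tm_rgb, sirb_band):
        """tm_rgb: uint8 RGB composite; sirb_band: co-registered radar band, same size."""
        hsv = cv2.cvtColor(tm_rgb, cv2.COLOR_RGB2HSV)
        radar = cv2.normalize(sirb_band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        hsv[..., 2] = radar                      # replace the value (intensity) channel
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
    ```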

  3. Three-Dimensional Spatial-Spectral Filtering Based Feature Extraction for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    AKYUREK, H. A.

    2017-05-01

    Full Text Available Hyperspectral pixels, which have high spectral resolution, are used to predict the decomposition of material types over the area of the acquired image. Due to its multidimensional form, hyperspectral image classification is a challenging task. Hyperspectral images are also affected by radiometric noise. In order to improve classification accuracy, many researchers are focusing on the improvement of filtering, feature extraction and classification methods. In the context of hyperspectral image classification, spatial information is as important as spectral information. In this study, a three-dimensional spatial-spectral filtering based feature extraction method is presented. It consists of three main steps. The first is a pre-processing step which includes spatial-spectral information filtering in three-dimensional space. The second extracts functional features from the filtered data. The last combines the extracted features by a serial feature fusion strategy and uses them to classify hyperspectral image pixels. Experiments were conducted on two popular public hyperspectral remote sensing images, with 1%, 5%, 10% and 15% of the samples of each class used as the training set and the remainder used as the test set. The proposed method was compared with well-known methods. Experimental results show that the proposed method achieved better performance than the compared methods in the hyperspectral image classification task.

  4. Infrared and visual image fusion through infrared feature extraction and visual information preservation

    Science.gov (United States)

    Zhang, Yu; Zhang, Lijia; Bai, Xiangzhi; Zhang, Li

    2017-06-01

    The ideal fusion of the infrared image and visual image should integrate the important bright features of the infrared image, and preserve much original visual information of the visual image. To achieve this purpose, we propose a simple, fast yet effective infrared and visual image fusion algorithm through infrared feature extraction and visual information preservation. Firstly, we take advantage of quadtree decomposition and Bézier interpolation to reconstruct the infrared background. Secondly, the infrared bright features are extracted by subtracting the reconstructed background from the infrared image and then refined by reducing the redundant background information. To inhibit the over-exposure problem, the refined infrared features are adaptively suppressed and then added on the visual image to achieve the final fusion image. In this way, the fusion image could not only reveal the invisible but important infrared objects by integrating the infrared bright features, but also show good visual quality by preserving much original visual information. Experiments performed on the commonly used image sets validate that the proposed algorithm outperforms several representative image fusion algorithms in most of the cases.

  5. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted by using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB), to evaluate the cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as a classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions beyond what the pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
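
    A minimal sketch of the texture-image-information step follows: convolve the spectrogram image with 2-D Laws' masks (outer products of the standard 1-D Laws vectors) and use the mean absolute filter responses as texture-energy features. The choice of vectors and the energy statistic are the standard Laws formulation; the paper's cubic contrast enhancement and SVM classifier are not shown.

    ```python
    # Hedged sketch of Laws' texture-energy features on a spectrogram image.
    import numpy as np
    from scipy.signal import convolve2d

    LAWS_1D = {
        "L5": np.array([1, 4, 6, 4, 1], float),      # level
        "E5": np.array([-1, -2, 0, 2, 1], float),    # edge
        "S5": np.array([-1, 0, 2, 0, -1], float),    # spot
        "R5": np.array([1, -4, 6, -4, 1], float),    # ripple
    }

    def laws_texture_features(spectrogram_image):
        img = spectrogram_image - spectrogram_image.mean()
        features = []
        for a in LAWS_1D.values():
            for b in LAWS_1D.values():
                mask = np.outer(a, b)                       # 5x5 Laws mask
                response = convolve2d(img, mask, mode="same", boundary="symm")
                features.append(np.abs(response).mean())    # texture energy
        return np.array(features)
    ```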

  6. Thermal feature extraction of servers in a datacenter using thermal image registration

    Science.gov (United States)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.

  7. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    Science.gov (United States)

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus, low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
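
    For context, the "stacked-vector" baseline that this abstract argues against can be sketched in a few lines with scikit-learn: concatenate the per-pixel spectral and spatial feature blocks, reduce them jointly with a single PCA, and classify. The feature arrays, dimensions and class count below are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-pixel feature blocks (spectral signature, texture, morphology).
n_pixels = 1000
spectral = np.random.rand(n_pixels, 200)
texture = np.random.rand(n_pixels, 40)
morphology = np.random.rand(n_pixels, 30)
labels = np.random.randint(0, 5, n_pixels)

# Naive baseline: concatenate all features into one high-dimensional vector,
# reduce with a single PCA, then classify with an SVM.
stacked = np.hstack([spectral, texture, morphology])
clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC())
clf.fit(stacked, labels)
print(clf.score(stacked, labels))
```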

  8. A new method to extract stable feature points based on self-generated simulation images

    Science.gov (United States)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received much attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem. Feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema with a machine learning algorithm. Firstly, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the set of simulated images of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known. Compared with the traditional matching process, which contains the unstable RANSAC method for estimating the affine transformation, this approach is more stable and accurate. Secondly, we calculate the stability value of the feature points using the image set and its affine transformations. We then obtain the different feature properties of each feature point, such as DoG features, scales, and edge point density. These two form the training set, where the stability value is the dependent variable and the feature properties are the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector obtained from training, we compute a ranking score for each feature point that reflects its stability value and sort the feature points accordingly. In conclusion, we compared our algorithm against the original SIFT detector; under different viewpoint changes, blurs, and illuminations, the experimental results show that our algorithm is more effective.

  9. The Use of Features Extracted from Noisy Samples for Image Restoration Purposes

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available An important feature of neural networks is their ability to learn from their environment and, through learning, to improve performance in some sense. In the following, we restrict the development to the problem of feature-extracting unsupervised neural networks derived on the basis of the biologically motivated Hebbian self-organizing principle, which is conjectured to govern natural neural assemblies, and of the classical principal component analysis (PCA) method used by statisticians for almost a century for multivariate data analysis and feature extraction. The research work reported in the paper aims to propose a new image reconstruction method based on the features extracted from the noise, given by the principal components of the noise covariance matrix.

  10. Annual Report on Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques

    National Research Council Canada - National Science Library

    Hao, Ling

    2000-01-01

    This report summarizes the scientific progress on the research grant "Radar Image Enhancement, Feature Extraction, and Motion Compensation Using Joint Time-Frequency Techniques" during the period 15...

  11. Lumbar Ultrasound Image Feature Extraction and Classification with Support Vector Machine.

    Science.gov (United States)

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2015-10-01

    Needle entry site localization remains a challenge for procedures that involve lumbar puncture, for example, epidural anesthesia. To solve the problem, we have developed an image classification algorithm that can automatically identify the bone/interspinous region for ultrasound images obtained from the lumbar spine of pregnant patients in the transverse plane. The proposed algorithm consists of feature extraction, feature selection and machine learning procedures. A set of features, including matching values, positions and the appearance of black pixels within pre-defined windows along the midline, were extracted from the ultrasound images using template matching and midline detection methods. A support vector machine was then used to classify the bone images and interspinous images. The support vector machine model was trained with 1,040 images from 26 pregnant subjects and tested on 800 images from a separate set of 20 pregnant patients. A success rate of 95.0% on the training set and 93.2% on the test set was achieved with the proposed method. The trained support vector machine model was further tested on 46 off-line collected videos, and successfully identified the proper needle insertion site (interspinous region) in 45 of the cases. Therefore, the proposed method is able to process ultrasound images of the lumbar spine in an automatic manner, so as to facilitate the anesthetists' work of identifying the needle entry site. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
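
    A rough sketch of the kind of pipeline described, under obvious simplifications: template-matching scores and dark-pixel counts in windows along the image midline are used as features and fed to an SVM. The template, window sizes, thresholds and data below are hypothetical, not the authors' settings.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def window_features(image, template, n_windows=8):
    """Template-matching score plus dark-pixel counts in windows along the
    vertical midline of a grayscale uint8 ultrasound frame."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    best_score = float(result.max())
    h, w = image.shape
    mid = image[:, w // 2 - 10: w // 2 + 10]               # band around the midline
    bands = np.array_split(mid, n_windows, axis=0)
    dark_counts = [float((b < 40).sum()) for b in bands]   # "black pixel" counts per window
    return [best_score] + dark_counts

# Hypothetical training data: grayscale frames and a bone template.
template = (np.random.rand(32, 32) * 255).astype(np.uint8)
frames = [(np.random.rand(256, 256) * 255).astype(np.uint8) for _ in range(20)]
labels = np.random.randint(0, 2, 20)                       # 0 = interspinous, 1 = bone

X = np.array([window_features(f, template) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```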

  12. Feature Extraction and Classification on Esophageal X-Ray Images of Xinjiang Kazak Nationality

    Directory of Open Access Journals (Sweden)

    Fang Yang

    2017-01-01

    Full Text Available Esophageal cancer is one of the fastest rising types of cancers in China. The Kazak nationality is the highest-risk group in Xinjiang. In this work, an effective computer-aided diagnostic system is developed to assist physicians in interpreting digital X-ray image features and improving the quality of diagnosis. The modules of the proposed system include image preprocessing, feature extraction, feature selection, image classification, and performance evaluation. 300 original esophageal X-ray images were resized to a region of interest and then enhanced by the median filter and histogram equalization method. 37 features from textural, frequency, and complexity domains were extracted. Both sequential forward selection and principal component analysis methods were employed to select the discriminative features for classification. Then, support vector machine and K-nearest neighbors were applied to classify the esophageal cancer images with respect to their specific types. The classification performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, precision, and recall, respectively. Experimental results show that the classification performance of the proposed system outperforms the conventional visual inspection approaches in terms of diagnostic quality and processing time. Therefore, the proposed computer-aided diagnostic system is promising for the diagnostics of esophageal cancer.
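
    The preprocessing stage described (region-of-interest cropping, median filtering, histogram equalization) can be sketched with OpenCV; the ROI box and kernel size below are assumptions.

```python
import numpy as np
import cv2

def preprocess_xray(img, roi=None, ksize=5):
    """Preprocessing sketch: crop to a region of interest, apply a median
    filter, then histogram equalization. `img` is grayscale uint8; `roi` is a
    hypothetical (x, y, w, h) box."""
    if roi is not None:
        x, y, w, h = roi
        img = img[y:y + h, x:x + w]
    img = cv2.medianBlur(img, ksize)   # suppress impulse noise
    img = cv2.equalizeHist(img)        # stretch contrast
    return img

if __name__ == "__main__":
    demo = (np.random.rand(512, 512) * 255).astype(np.uint8)
    print(preprocess_xray(demo, roi=(100, 100, 256, 256)).shape)
```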

  13. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  14. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, so overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is
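
    A compact sketch of the idea of using deep-layer activations as features for a separate classifier, here with a toy PyTorch CNN whose 8-unit fully connected layer plays the role of the deep layer; the architecture, data and training details are placeholders rather than the authors' network.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import MLPClassifier

class SmallCNN(nn.Module):
    """Toy CNN standing in for a trained network; the 8-unit fully connected
    layer provides the activations that are reused as features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(16 * 4 * 4, 8)   # deep layer: 8 activations used as features
        self.out = nn.Linear(8, 4)           # 4 breast-density classes

    def forward(self, x):
        x = self.features(x).flatten(1)
        feat = torch.relu(self.fc(x))
        return self.out(feat), feat

# Stand-in batch of grayscale 260x200 inputs with 4 hypothetical density labels.
images = torch.randn(32, 1, 260, 200)
labels = np.random.randint(0, 4, 32)

model = SmallCNN().eval()
with torch.no_grad():
    _, feats = model(images)                 # 8-dimensional deep features
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(feats.numpy(), labels)
print(clf.score(feats.numpy(), labels))
```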

  15. An unsupervised feature extraction method for high dimensional image data compaction

    Science.gov (United States)

    Ghassemian, Hassan; Landgrebe, David

    1987-01-01

    A new on-line unsupervised feature extraction method for high-dimensional remotely sensed image data compaction is presented. This method can be utilized to solve the problem of data redundancy in scene representation by satellite-borne high resolution multispectral sensors. The algorithm first partitions the observation space into an exhaustive set of disjoint objects. Then, pixels that belong to an object are characterized by an object feature. Finally, the set of object features is used for data transmission and classification. The example results show that the performance with the compacted features provides a slight improvement in classification accuracy instead of any degradation. Also, the information extraction method does not need to be preceded by a data decompaction.

  16. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in the RGB and La*b* color spaces. In this way, spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared over various sample sizes with Support Vector Machines using the k-fold cross-validation method. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves as the size of the spatial window increases.
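
    A sketch of windowed texture features in the Haralick spirit, using scikit-image's GLCM utilities (named graycomatrix/graycoprops in recent versions) on a square window around a candidate pixel; the window size, distances and property subset are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_window_features(gray, center, half=16):
    """GLCM-based texture descriptors computed in a square window around a
    candidate pixel of a uint8 grayscale image."""
    r, c = center
    win = gray[max(r - half, 0):r + half, max(c - half, 0):c + half]
    glcm = graycomatrix(win, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

if __name__ == "__main__":
    img = (np.random.rand(200, 200) * 255).astype(np.uint8)
    print(haralick_window_features(img, center=(100, 100), half=16))
```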

  17. Feature extraction and pattern classification of colorectal polyps in colonoscopic imaging.

    Science.gov (United States)

    Fu, Jachih J C; Yu, Ya-Wen; Lin, Hong-Mau; Chai, Jyh-Wen; Chen, Clayton Chi-Chang

    2014-06-01

    A computer-aided diagnostic system for colonoscopic imaging has been developed to classify colorectal polyps by type. The modules of the proposed system include image enhancement, feature extraction, feature selection and polyp classification. Three hundred sixty-five images (214 with hyperplastic polyps and 151 with adenomatous polyps) were collected from a branch of a medical center in central Taiwan. The raw images were enhanced by the principal component transform (PCT). The features of texture analysis, spatial domain and spectral domain were extracted from the first component of the PCT. Sequential forward selection (SFS) and sequential floating forward selection (SFFS) were used to select the input feature vectors for classification. Support vector machines (SVMs) were employed to classify the colorectal polyps by type. The classification performance was measured by the Az values of the Receiver Operating Characteristic curve. For all 180 features used as input vectors, the test data set yielded Az values of 88.7%. The Az value was increased by 2.6% (from 88.7% to 91.3%) and 4.4% (from 88.7% to 93.1%) for the features selected by the SFS and the SFFS, respectively. The SFS and the SFFS reduced the dimension of the input vector by 57.2% and 73.8%, respectively. The SFFS outperformed the SFS in both the reduction of the dimension of the feature vector and the classification performance. When the colonoscopic images were visually inspected by experienced physicians, the accuracy of detecting polyps by types was around 85%. The accuracy of the SFFS with the SVM classifier reached 96%. The classification performance of the proposed system outperformed the conventional visual inspection approach. Therefore, the proposed computer-aided system could be used to improve the quality of colorectal polyp diagnosis. Copyright © 2014. Published by Elsevier Ltd.
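
    The principal component transform used here for enhancement can be sketched directly with NumPy: treat each pixel's RGB triplet as a sample, compute the principal components over all pixels, and keep the first-component image for later feature extraction. This is a generic PCT sketch, not the authors' exact implementation.

```python
import numpy as np

def principal_component_transform(rgb):
    """Return the first principal-component image of an H x W x 3 array by
    diagonalizing the 3x3 covariance of the pixel RGB values."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    first = eigvecs[:, np.argmax(eigvals)]   # direction of maximum variance
    return (X @ first).reshape(h, w)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)
    print(principal_component_transform(img).shape)
```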

  18. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications could be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry in cells and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented in medical image analysis, they focus mainly on tissues and organs, and none of the surveys about cell images considers an analysis following the stages of a typical image processing pipeline: segmentation, feature extraction and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey about cell identification using different image processing techniques.

  19. Retinal status analysis method based on feature extraction and quantitative grading in OCT images.

    Science.gov (United States)

    Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri

    2016-07-22

    Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, which is a critical part of eye fundus diagnosis. This study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods based on geometric features and morphological features were proposed. The paper puts forward a retinal abnormality grading decision-making method, which was used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process on four retinal OCT images with different degrees of abnormality. The final grading results verified that the analysis method can distinguish abnormality severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status showed a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate the retinal status. This paper focuses on a retinal status automatic analysis method based on feature extraction and quantitative grading in OCT images. The proposed method can obtain the parameters and features that are associated with retinal morphology. Quantitative analysis and evaluation of these features are combined with the reference model, which can realize abnormality judgment for the target image and provide a reference for disease diagnosis.

  20. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images.

    Science.gov (United States)

    Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra

    2017-05-01

    Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we have presented a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked based on the t-value feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.
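
    Correntropy itself is simple to compute; a sketch of a Gaussian-kernel correntropy measure between two arrays (for example, two decomposed components) follows, with the kernel width as a free parameter. This is a generic formulation, not tied to the authors' feature definition.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Correntropy between two equal-length arrays: the mean of a Gaussian
    kernel applied to their sample-wise differences."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2))))

if __name__ == "__main__":
    a = np.random.randn(1000)
    b = a + 0.1 * np.random.randn(1000)
    print(correntropy(a, b, sigma=0.5))
```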

  1. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter banks based feature extraction, and in the classifier, in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks, and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together the three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids reading many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications such as pattern classification, data compression, watermarking, denoising, etc.

  2. Study on Feature Extraction Methods for Character Recognition of Balinese Script on Palm Leaf Manuscript Images

    OpenAIRE

    Kesiman, Made Windu Antara; Prum, Sophea; Burie, Jean-Christophe; Ogier, Jean-Marc

    2016-01-01

    The complexity of Balinese script and the poor quality of palm leaf manuscripts provide a new challenge for testing and evaluation of robustness of feature extraction methods for character recognition. With the aim of finding the combination of feature extraction methods for character recognition of Balinese script, we present, in this paper, our experimental study on feature extraction methods for character recognition on palm leaf manuscripts. We investigated and eva...

  3. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.

  4. A method of extracting feature points for a gastric x-ray image filled with barium

    International Nuclear Information System (INIS)

    Nakamura, Shizuo

    1980-01-01

    Gastric form, as well as the fringe density, provides important information for the diagnosis of filled stomachs. For the m portions near each point on the contour line, the curvatures are obtained, so that the form feature point is determined from the maximum curvature point. In applying the method to contour lines extracted from an upright, frontal, filled image, the nature of the parameter m, which depends on the objects, was examined. By choosing m as a few percent of the total number of data points composing the filled gastric contour lines and extracting the maximum curvature points over a large neighborhood, the feature points required for gastric form recognition can be extracted, and the gastric angle position and its neighborhood, which are important for diagnosis, are also detected. Since a portion with fringes hardened by a morbid state appears straight, the straight portion of the curve can be detected by taking the difference of the curvature at each point and detecting the portions with very small values. (J.P.N.)

  5. Concordance of computer-extracted image features with BI-RADS descriptors for mammographic mass margin

    Science.gov (United States)

    Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Paramagul, Chintana; Nees, Alexis; Helvie, Mark; Shi, Jiazheng

    2008-03-01

    The purpose of this study was to develop and evaluate computer-extracted features for characterizing mammographic mass margins according to BI-RADS spiculated and circumscribed categories. The mass was automatically segmented using an active contour model. A spiculation measure for a pixel on the mass boundary was defined by using the angular difference between the image gradient vector and the normal to the mass, averaged over pixels in a spiculation search region. For the circumscribed margin feature, the angular difference between the principal eigenvector of the Hessian matrix and the normal to the mass was estimated in a band of pixels centered at each point on the boundary, and the feature was extracted from the resulting profile along the boundary. Three MQSA radiologists provided BI-RADS margin ratings for a data set of 198 regions of interest containing breast masses. The features were evaluated with respect to the individual radiologists' characterization using receiver operating characteristic (ROC) analysis, as well as with respect to that from the majority rule, in which a mass was labeled as spiculated (circumscribed) if it was characterized as such by 2 or 3 radiologists, and non-spiculated (non-circumscribed) otherwise. We also investigated the performance of the features for consensus masses, defined as those labeled as spiculated (circumscribed) or nonspiculated (non-circumscribed) by all three radiologists. When masses were labeled according to radiologists R1, R2, and R3 individually, the spiculation feature had an area Az under the ROC curve of 0.90+/-0.04, 0.90+/-0.03, 0.88+/-0.03, respectively, while the circumscribed margin feature had an Az value of 0.77+/-0.04, 0.74+/-0.04, and 0.80+/-0.03, respectively. When masses were labeled according to the majority rule, the Az values for the spiculation and the circumscribed margin features were 0.92+/-0.03 and 0.80+/-0.03, respectively. When only the consensus masses were considered, the Az

  6. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Topouzelis

    2008-10-01

    Full Text Available This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern since they seriously affect fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations in SAR images, the features which are extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  7. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms.

    Science.gov (United States)

    Topouzelis, Konstantinos N

    2008-10-23

    This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern since they seriously affect fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations in SAR images, the features which are extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  9. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  10. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  11. Feature Extraction of Weld Defectology in Digital Image of Radiographic Film Using Geometric Invariant Moment and Statistical Texture

    International Nuclear Information System (INIS)

    Muhtadan

    2009-01-01

    The purpose of this research is to perform feature extraction on weld defects in digital images of radiographic film using the geometric invariant moment and statistical texture methods. The feature extraction values can be used for classification and pattern recognition in the automatic computer interpretation of weld defects in digital images of radiographic film. The weld defect types used in this research are longitudinal crack, transversal crack, distributed porosity, clustered porosity, wormhole, and no defect. The research methodology consists of developing a program to read the digital image, cropping the image to localize the weld position, and then applying the geometric invariant moment and statistical texture formulas to find the feature values. The results are feature extraction values that have been tested under RST (rotation, scale, transformation) treatment; the moment values that are more invariant are ϕ3, ϕ4, and ϕ5 from the geometric invariant moment method. The feature values from statistical texture, namely average intensity, average contrast, smoothness, 3rd moment, uniformity, and entropy, are used as feature extraction values. (author)
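
    Both feature groups have standard formulations; a sketch combining OpenCV's Hu invariant moments with first-order statistical texture measures (average intensity, average contrast, smoothness, third moment, uniformity, entropy) computed from the grayscale histogram is shown below. The exact normalizations are assumptions, not the paper's definitions.

```python
import numpy as np
import cv2

def weld_features(gray):
    """Hu's seven geometric invariant moments of a grayscale uint8 image plus
    six first-order statistical texture measures from its histogram."""
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()       # 7 invariant moments

    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    mean = float((levels * p).sum())                       # average intensity
    var = float(((levels - mean) ** 2 * p).sum())
    contrast = np.sqrt(var)                                # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + var / 255.0 ** 2)
    third_moment = float(((levels - mean) ** 3 * p).sum()) / 255.0 ** 2
    uniformity = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())

    texture = [mean, contrast, smoothness, third_moment, uniformity, entropy]
    return np.concatenate([hu, texture])

if __name__ == "__main__":
    img = (np.random.rand(128, 256) * 255).astype(np.uint8)
    print(weld_features(img).shape)   # 7 moments + 6 texture values
```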

  12. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images. This process is an attempt to eventually replace manua...

  13. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction from SAR images is one of the key tasks in military and civilian technologies. To address the issues of road extraction from HJ-1-C SAR images, a road extraction algorithm is proposed based on the integration of ratio and directional information. Due to the characteristically narrow dynamic range and low signal-to-noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. By applying the Radon transform, the main road directions can be extracted. Cross interferences can be suppressed, and road continuity can then be improved by the main direction alignment and secondary road extraction. An HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance, with correctness of 80.5% and quality of 70.1%, when applied to a SAR image with complex content.
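
    The main-direction step can be illustrated with scikit-image's Radon transform: project a road-response image over all angles and keep the angles whose projections have the strongest peaks. The peak-energy criterion and the synthetic input are simplifications, not the paper's fusion algorithm.

```python
import numpy as np
from skimage.transform import radon

def main_road_directions(road_mask, n_peaks=2):
    """Return the angles (in degrees) whose Radon projections of a road
    response image contain the highest peak energy."""
    theta = np.arange(180, dtype=float)
    sinogram = radon(road_mask.astype(float), theta=theta, circle=False)
    energy = sinogram.max(axis=0)            # strongest projection bin per angle
    best = np.argsort(energy)[::-1][:n_peaks]
    return theta[best]

if __name__ == "__main__":
    mask = np.zeros((200, 200))
    mask[:, 95:105] = 1.0                    # a synthetic straight road
    print(main_road_directions(mask))
```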

  14. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that the SDAE pretraining in conjunction with the LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.
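
    A greedy two-layer sketch of SDAE-style pretraining with ReLU encodings, followed by logistic regression on the top-layer codes, can be written with Keras and scikit-learn; the data, layer sizes and training schedule below are placeholders, and plain logistic regression on frozen codes stands in for the paper's supervised fine-tuning stage.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Hypothetical hyperspectral pixels: 200 spectral bands, 16 land-cover classes.
n, bands, n_classes = 2000, 200, 16
X = np.random.rand(n, bands).astype("float32")
y = np.random.randint(0, n_classes, n)

def denoising_autoencoder(inp_dim, hid_dim, x, noise_std=0.1, epochs=5):
    """One denoising-autoencoder layer: learn to reconstruct clean inputs from
    noisy ones, then return the trained encoder for the next stacking step."""
    inp = tf.keras.Input(shape=(inp_dim,))
    code = tf.keras.layers.Dense(hid_dim, activation="relu")(inp)
    recon = tf.keras.layers.Dense(inp_dim, activation="sigmoid")(code)
    ae = tf.keras.Model(inp, recon)
    ae.compile(optimizer="adam", loss="mse")
    noisy = x + noise_std * np.random.randn(*x.shape).astype("float32")
    ae.fit(noisy, x, epochs=epochs, batch_size=64, verbose=0)
    return tf.keras.Model(inp, code)

enc1 = denoising_autoencoder(bands, 64, X)
h1 = enc1.predict(X, verbose=0)
enc2 = denoising_autoencoder(64, 32, h1)
h2 = enc2.predict(h1, verbose=0)

# Logistic regression on the top-layer codes.
clf = LogisticRegression(max_iter=1000).fit(h2, y)
print(clf.score(h2, y))
```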

  15. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    Science.gov (United States)

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (hh:mm) using one core, and in 1:04 (hh:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.

  16. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used for image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Houston data" and the multispectral image "Washington DC data" show that this new scheme can achieve better feature learning performance than the primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  17. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis.

    Science.gov (United States)

    Chen, Yunhua; Liu, Weijian; Zhang, Ling; Yan, Mingyu; Zeng, Yanjun

    2015-09-01

    Due to the absence of reliable biochemical markers, the diagnosis of chronic fatigue syndrome (CFS) currently relies mainly on clinical symptoms and on the experience and skill of the doctors. To improve objectivity and reduce work intensity, a hybrid facial feature is proposed. First, several kinds of appearance features are identified in different facial regions according to clinical observations of traditional Chinese medicine experts, including vertical striped wrinkles on the forehead, puffiness of the lower eyelid, the skin colour of the cheeks, nose and lips, and the shape of the mouth corner. Afterwards, such features are extracted and systematically combined to form a hybrid feature. We divide the face into several regions based on twelve active appearance model (AAM) feature points and ten straight lines across them. Then, Gabor wavelet filtering, CIELab color components, threshold-based segmentation and curve fitting are applied to extract the features, and the Gabor features are reduced by a manifold preserving projection method. Finally, an AdaBoost-based score-level fusion of the multi-modal features is performed after classification of each feature. Although the subjects involved in this trial are exclusively Chinese, the method achieves an average accuracy of 89.04% on the training set and 88.32% on the testing set based on K-fold cross-validation. In addition, the method also possesses desirable sensitivity and specificity for CFS prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Feature extraction, analysis, and 3D visualization of local lung regions in volumetric CT images

    Science.gov (United States)

    Delegacz, Andrzej; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    2001-05-01

    The purpose of the work was to develop image functions for volumetric segmentation, feature extraction, and enhanced 3D visualization of local regions using CT datasets of human lungs. The system is aimed to assist the radiologist in the analysis of lung nodules. Volumetric datasets consisting of 30-50 thoracic helical low-dose CT slices were used in the study. The 3D topological characteristics of local structures including bronchi, blood vessels, and nodules were computed and evaluated. When a location of a region of interest is identified, the computer would automatically compute size, surface of the area, and normalized shape index of the suspected lesion. The developed system can also allow the user to perform interactive operation for evaluation of lung regions and structures through a user- friendly interface. These functions provide the user with a powerful tool to observe and investigate clinically interesting regions through unconventional radiographic viewings and analyses. The developed functions can also be used to view and analyze patient's lung abnormalities in surgical planning applications. Additionally, we see the possibility of using the system as a teaching tool for correlating anatomy of lungs.

  19. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    to predict by themselves the risk of rupture. Therefore, our hypothesis is that the risk of rupture lies in the combination of multiple actors. These actors together would play different roles that could be: weakening of the artery wall, increasing biomechanical stresses on the wall induced by blood flow, in addition to personal sensitivity due to family history, personal history of comorbidity, or even seasonal variations that could trigger different inflammation mechanisms. The main goal of this project is to identify relevant variables that may help in the process of predicting the risk of intracranial aneurysm rupture, using machine learning and image processing techniques based on structured and non-structured data from multiple sources. We believe that the identification and the combined use of relevant variables extracted from clinical, demographic, environmental and medical imaging data sources will improve the estimation of the aneurysm rupture risk with respect to the currently practiced method, which is based essentially on aneurysm size. The methodology of this work consists of four phases: (1) data collection and storage, (2) feature extraction from multiple sources, in particular from angiographic images, (3) development of a model that describes the risk of aneurysm rupture based on the fusion and combination of the features, and (4) identification of relevant variables related to the aneurysm rupture process. This study corresponds to an analytic transversal study with prospective and retrospective characteristics. The work will be based on publicly available health statistics data and weather condition data, together with clinical and demographic data of patients diagnosed with intracranial aneurysm in the Hospital Carlos van Buren. As the main results of this project, we expect to identify relevant variables extracted from images and other sources that could play a role in the risk of aneurysm rupture. The proposed model will be presented to the

  20. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    and compensation, and finally a descriptor is computed for the derived patch (i.e. feature of the patch). To avoid the memory and computational intensive process of constructing the scale-space, we use a method where no scale-space is required This is done by dividing the given image into a number of triangles...

  1. Feature Point Extraction from the Local Frequency Map of an Image

    Directory of Open Access Journals (Sweden)

    Jesmin Khan

    2012-01-01

    Full Text Available We propose a novel technique for detecting rotation- and scale-invariant interest points from the local frequency representation of an image. Local or instantaneous frequency is the spatial derivative of the local phase, where the local phase of any signal can be found from its Hilbert transform. Local frequency estimation can detect edge, ridge, corner, and texture information at the same time, and it shows high values at those dominant features of an image. For each pixel, we select an appropriate width of the window for computing the derivative of the phase. In order to select the width of the window for any given pixel, we make use of the measure of the extent to which the phases, in the neighborhood of that pixel, are in the same direction. The local frequency map, thus obtained, is then thresholded by employing a global thresholding approach to detect the interest or feature points. Repeatability rate, a performance evaluation criterion for an interest point detector, is used to check the geometric stability of the proposed method under different transformations. We present simulation results of the detection of feature points from images using the suggested technique and compare the proposed method with five existing approaches that yield good results. The results prove the efficacy of the proposed feature point detection algorithm. Moreover, in terms of repeatability rate, the results show that the performance of the proposed method under different aspects is comparable with the existing methods.

  2. Feature extraction of multispectral data

    Science.gov (United States)

    Crane, R. B.; Crimmins, T.; Reyer, J. F.

    1973-01-01

    A method is presented for feature extraction of multispectral scanner data. Non-training data is used to demonstrate the reduction in processing time that can be obtained by using feature extraction rather than feature selection.

  3. A method for normalizing pathology images to improve feature extraction for quantitative pathology.

    Science.gov (United States)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
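
    The two ICHE steps described can be approximated in a few lines: shift the histogram centroid of a grayscale channel to a common target, then apply CLAHE. OpenCV's standard CLAHE is used here in place of the paper's modified version, and the target value and clip limit are assumptions.

```python
import numpy as np
import cv2

def iche_like_normalize(gray, target_center=128):
    """Shift the intensity-histogram centroid of a uint8 image to a common
    target, then apply contrast-limited adaptive histogram equalization."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    centroid = (np.arange(256) * hist).sum() / hist.sum()
    shifted = np.clip(gray.astype(float) + (target_center - centroid), 0, 255)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(shifted.astype(np.uint8))

if __name__ == "__main__":
    slide_patch = (np.random.rand(256, 256) * 255).astype(np.uint8)
    print(iche_like_normalize(slide_patch).dtype)
```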

  4. Deep feature extraction and combination for remote sensing image classification based on pre-trained CNN models

    Science.gov (United States)

    Chaib, Souleyman; Yao, Hongxun; Gu, Yanfeng; Amrani, Moussa

    2017-07-01

    Understanding a scene provided by Very High Resolution (VHR) satellite imagery has become an increasingly challenging problem. In this paper, we propose a new method for scene classification based on different pre-trained Deep Feature Learning Models (DFLMs). DFLMs are applied simultaneously to extract deep features from the VHR image scene, and then different basic operators are applied to combine the features extracted with different pre-trained Convolutional Neural Network (CNN) models. We conduct experiments on the public UC Merced benchmark dataset, which contains 21 different aerial categories with sub-meter resolution. Experimental results demonstrate the effectiveness of the proposed method, as compared to several state-of-the-art methods.

  5. Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM

    Directory of Open Access Journals (Sweden)

    Nilesh Bhaskarrao Bahadure

    2017-01-01

    Full Text Available The segmentation, detection, and extraction of an infected tumor area from magnetic resonance (MR) images is a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and its accuracy depends on their experience only. So, the use of computer-aided technology becomes very necessary to overcome these limitations. In this study, to improve the performance and reduce the complexity involved in the medical image segmentation process, we have investigated Berkeley wavelet transformation (BWT) based brain tumor segmentation. Furthermore, to improve the accuracy and quality rate of the support vector machine (SVM) based classifier, relevant features are extracted from each segmented tissue. The experimental results of the proposed technique have been evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on accuracy, sensitivity, specificity, and the dice similarity index coefficient. The experimental results achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues from brain MR images. The experimental results also obtained an average dice similarity index coefficient of 0.82, which indicates better overlap between the automated (machine) extracted tumor region and the tumor region manually extracted by radiologists. The simulation results prove the significance of the proposed technique in terms of quality parameters and accuracy in comparison to state-of-the-art techniques.

  6. Reproducibility of quantitative high-throughput BI-RADS features extracted from ultrasound images of breast cancer.

    Science.gov (United States)

    Hu, Yuzhou; Qiao, Mengyun; Guo, Yi; Wang, Yuanyuan; Yu, Jinhua; Li, Jiawei; Chang, Cai

    2017-07-01

    Digital Breast Imaging Reporting and Data System (BI-RADS) features extracted from ultrasound images are essential in computer-aided diagnosis, prediction, and prognosis of breast cancer. This study focuses on the reproducibility of quantitative high-throughput BI-RADS features in the presence of variations due to different segmentation results, various ultrasound machine models, and multiple ultrasound machine settings. Dataset 1 consists of 399 patients with invasive breast cancer and is used as the training set to measure the reproducibility of features, while dataset 2 consists of 138 other patients and is a validation set used to evaluate the diagnosis performances of the final reproducible features. Four hundred and sixty high-throughput BI-RADS features are designed and quantized according to the BI-RADS lexicon. Concordance Correlation Coefficient (CCC) and Deviation (Dev) are used to assess the effect of the segmentation methods and Between-class Distance (BD) is used to study the influences of the machine models. In addition, the features jointly shared by two methodologies are further investigated on their effects with multiple machine settings. Subsequently, the absolute value of the Pearson Correlation Coefficient (Rabs) is applied for redundancy elimination. Finally, the features that are reproducible and not redundant are preserved as the stable feature set. A 10-fold Support Vector Machine (SVM) classifier is employed to verify the diagnostic ability. One hundred and fifty-three features were found to have high reproducibility (CCC > 0.9 & Dev BI-RADS features to various degrees. Our 46 reproducible features were robust to these factors and were capable of distinguishing benign and malignant breast tumors. © 2017 American Association of Physicists in Medicine.
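
    The reproducibility measure named in the abstract, the Concordance Correlation Coefficient, has a closed form; a small sketch follows, with synthetic paired measurements standing in for the same feature computed under two different segmentations.

```python
import numpy as np

def concordance_correlation_coefficient(x, y):
    """Lin's concordance correlation coefficient between two measurements of
    the same quantity: CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

if __name__ == "__main__":
    a = np.random.rand(399)
    b = a + 0.01 * np.random.randn(399)   # e.g., same feature from two segmentations
    print(concordance_correlation_coefficient(a, b))
```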

  7. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    Science.gov (United States)

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons, and the convolution process produces them. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma risk index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
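
    The LM, Schmid, and MR filter banks used in the paper are large; purely as an illustration of the texton idea, the sketch below convolves an image with a tiny stand-in filter bank (Gaussian, two derivatives, Laplacian) and clusters the per-pixel responses into texton labels. The bank and the cluster count are assumptions, not the paper's configuration:

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def texton_map(gray: np.ndarray, n_textons: int = 16, seed: int = 0) -> np.ndarray:
        """Cluster per-pixel filter-bank responses into texton labels (illustrative bank)."""
        responses = np.stack([
            ndimage.gaussian_filter(gray, sigma=1.0),                 # smoothed image
            ndimage.gaussian_filter(gray, sigma=1.0, order=(0, 1)),   # horizontal derivative
            ndimage.gaussian_filter(gray, sigma=1.0, order=(1, 0)),   # vertical derivative
            ndimage.gaussian_laplace(gray, sigma=1.0),                # Laplacian of Gaussian
        ], axis=-1)
        pixels = responses.reshape(-1, responses.shape[-1])
        labels = KMeans(n_clusters=n_textons, n_init=10, random_state=seed).fit_predict(pixels)
        return labels.reshape(gray.shape)

    rng = np.random.default_rng(0)
    print(np.unique(texton_map(rng.random((32, 32)), n_textons=4)).size)
    ```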

  8. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    International Nuclear Information System (INIS)

    Parekh, V; Jacobs, MA

    2016-01-01

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Potentially extracting useful features specific to a patient’s pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm to extract features for classification of breast tumors into benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRaGe algorithm.
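
    The MIRaGe algorithm and its modified t-Isomap are not spelled out in the record; as a rough illustration of the underlying manifold-embedding-plus-SVM idea only, the sketch below chains standard Isomap with an SVM on synthetic data. All sizes, neighbour counts, and kernels are assumptions:

    ```python
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy stand-in: 76 "patients", 40 multiparametric features each, binary labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(76, 40))
    y = rng.integers(0, 2, size=76)

    # Learn a low-dimensional geodesic embedding, then classify with an SVM,
    # validated with k-fold cross validation as in the abstract.
    model = make_pipeline(StandardScaler(), Isomap(n_neighbors=10, n_components=5), SVC(kernel="rbf"))
    print(cross_val_score(model, X, y, cv=5).mean())
    ```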

  9. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, V [The Johns Hopkins University, Computer Science. Baltimore, MD (United States); Jacobs, MA [The Johns Hopkins University School of Medicine, Dept of Radiology and Oncology. Baltimore, MD (United States)

    2016-06-15

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Potentially extracting useful features specific to a patient’s pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm to extract features for classification of breast tumors into benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRaGe algorithm.

  10. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, which include characteristics of spectrum, image and motion. Feature extraction is also achieved. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature vs. object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained

  11. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    Science.gov (United States)

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, were recorded over a period of time and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to the various classification schemes, namely fuzzy logic, SVM and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two existing schemes. The results of this study address industrial application needs, including oil and gas and any other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
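
    A minimal sketch of the SVM-with-PCA scheme described above, using synthetic stand-ins for the textural and shape features extracted from the flow-pattern frames; the class counts, component number, and SVM parameters are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Six flow classes (annular, bubble, churn, plug, slug, stratified), 50 frames each,
    # 20 textural/shape features per frame -- all randomly generated placeholders.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    y = np.repeat(np.arange(6), 50)

    clf = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf", C=10.0))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```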

  12. Coloring local feature extraction

    OpenAIRE

    Van De Weijer, Joost; Schmid, Cordelia

    2006-01-01

    Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the-art local feature-based representations are mostly based on shape description and ignore color information. The description of color is hampered by the large amount of variation, which causes the measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To accomplish a wide applic...

  13. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    Science.gov (United States)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistically matching the geometrical shape of the kidney. A set of wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data sets. The model is initially localized based on the intensity profiles in three directions. Weight functions are defined for each labeled voxel for each wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results, and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
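
    The W-SVM texture priors are not detailed in the record; as a hedged illustration, the sketch below computes simple per-patch wavelet sub-band energies with PyWavelets, the kind of texture descriptor such a classifier could consume. The wavelet choice and the energy statistic are assumptions:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_texture_features(patch: np.ndarray, wavelet: str = "db2") -> np.ndarray:
        """Energy of each 2D wavelet sub-band, used as a simple texture descriptor."""
        cA, (cH, cV, cD) = pywt.dwt2(patch.astype(float), wavelet)
        return np.array([np.mean(band ** 2) for band in (cA, cH, cV, cD)])

    rng = np.random.default_rng(0)
    print(wavelet_texture_features(rng.random((16, 16))))
    ```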

  14. Mapping quantitative trait loci affecting Arabidopsis thaliana seed morphology features extracted computationally from images.

    Science.gov (United States)

    Moore, Candace R; Gronwall, David S; Miller, Nathan D; Spalding, Edgar P

    2013-01-01

    Seeds are studied to understand dispersal and establishment of the next generation, as units of agricultural yield, and for other important reasons. Thus, elucidating the genetic architecture of seed size and shape traits will benefit basic and applied plant biology research. This study sought quantitative trait loci (QTL) controlling the size and shape of Arabidopsis thaliana seeds by computational analysis of seed phenotypes in recombinant inbred lines derived from the small-seeded Landsberg erecta × large-seeded Cape Verde Islands accessions. On the order of 10³ seeds from each recombinant inbred line were automatically measured with flatbed photo scanners and custom image analysis software. The eight significant QTL affecting seed area explained 63% of the variation, and overlapped with five of the six major-axis (length) QTL and three of the five minor-axis (width) QTL, which accounted for 57% and 38% of the variation in those traits, respectively. Because the Arabidopsis seed is exalbuminous, lacking an endosperm at maturity, the results are relatable to embryo length and width. The Cvi allele generally had a positive effect of 2.6-4.0%. Analysis of variance showed heritability of the three traits ranged between 60% and 73%. Repeating the experiment with 2.2 million seeds from a separate harvest of the RIL population and approximately 0.5 million seeds from 92 near-isogenic lines confirmed the aforementioned results. Structured for download are files containing phenotype measurements, all sets of seed images, and the seed trait measuring tool.
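
    The custom image analysis software is not public; a minimal sketch of how seed area, length (major axis) and width (minor axis) could be measured from a thresholded scan with scikit-image region properties. The two elliptical blobs below stand in for real scanned seeds:

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic binary "scan" with two elliptical seeds; a real pipeline would obtain
    # this mask by thresholding the flatbed-scanner image.
    img = np.zeros((120, 200), dtype=bool)
    rr, cc = np.ogrid[:120, :200]
    img |= ((rr - 40) / 12) ** 2 + ((cc - 50) / 20) ** 2 <= 1
    img |= ((rr - 80) / 10) ** 2 + ((cc - 140) / 16) ** 2 <= 1

    for region in measure.regionprops(measure.label(img)):
        area = region.area                  # seed size in pixels
        length = region.major_axis_length   # major axis ~ seed length
        width = region.minor_axis_length    # minor axis ~ seed width
        print(area, round(length, 1), round(width, 1))
    ```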

  15. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm.

    Science.gov (United States)

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-06-22

    To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of an image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the accuracy of classification based on this method was significantly improved.
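
    A small sketch of the GLCM part of the scheme: computing a gray level co-occurrence matrix over several directions with scikit-image and reading off standard texture properties. The direction-measure weighting proposed in the paper is application specific and only noted in a comment; the quantised patch is a random placeholder:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)  # image patch quantised to 8 levels

    # GLCM over four directions; per-direction properties could then be re-weighted by a
    # direction measure before fusion (that weighting step is omitted here).
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=8, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, graycoprops(glcm, prop).ravel())
    ```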

  16. Computational approach to radiogenomics of breast cancer: Luminal A and luminal B molecular subtypes are associated with imaging features on routine breast MRI extracted using computer vision algorithms.

    Science.gov (United States)

    Grimm, Lars J; Zhang, Jing; Mazurowski, Maciej A

    2015-10-01

    To identify associations between semiautomatically extracted MRI features and breast cancer molecular subtypes. We analyzed routine clinical pre-operative breast MRIs from 275 breast cancer patients at a single institution in this retrospective, Institutional Review Board-approved study. Six fellowship-trained breast imagers reviewed the MRIs and annotated the cancers. Computer vision algorithms were then used to extract 56 imaging features from the cancers including morphologic, texture, and dynamic features. Surrogate markers (estrogen receptor [ER], progesterone receptor [PR], human epidermal growth factor receptor-2 [HER2]) were used to categorize tumors by molecular subtype: ER/PR+, HER2- (luminal A); ER/PR+, HER2+ (luminal B); ER/PR-, HER2+ (HER2); ER/PR/HER2- (basal). A multivariate analysis was used to determine associations between the imaging features and molecular subtype. The imaging features were associated with both luminal A (P = 0.0007) and luminal B (P = 0.0063) molecular subtypes. No association was found for either HER2 (P = 0.2465) or basal (P = 0.1014) molecular subtype and the imaging features. A P-value of 0.0125 (0.05/4) was considered significant. Luminal A and luminal B molecular subtype breast cancer are associated with semiautomatically extracted features from routine contrast enhanced breast MRI. © 2015 Wiley Periodicals, Inc.

  17. Sequential Dimensionality Reduction for Extracting Localized Features

    OpenAIRE

    Casalino, Gabriella; Gillis, Nicolas

    2015-01-01

    Linear dimensionality reduction techniques are powerful tools for image analysis as they allow the identification of important features in a data set. In particular, nonnegative matrix factorization (NMF) has become very popular as it is able to extract sparse, localized and easily interpretable features by imposing an additive combination of nonnegative basis elements. Nonnegative matrix underapproximation (NMU) is a closely related technique that has the advantage of identifying features sequentially...
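
    As a hedged illustration of the NMF side of this idea (not the NMU variant discussed in the paper), a short scikit-learn sketch that factorizes a non-negative data matrix into additive, typically sparse and localized basis features; the matrix sizes and solver settings are assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Stand-in data matrix: 100 non-negative "images" of 64 pixels each, one per row.
    rng = np.random.default_rng(0)
    X = rng.random((100, 64))

    model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)   # per-sample activations
    H = model.components_        # non-negative basis images / localized features
    print(W.shape, H.shape)
    ```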

  18. Feature extraction using fractal codes

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Paul M. de Zeeuw

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  19. Feature Extraction Using Fractal Codes

    NARCIS (Netherlands)

    B.A.M. Schouten (Ben); P.M. de Zeeuw (Paul)

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can

  20. Report of subpanel on feature extraction

    Science.gov (United States)

    1982-01-01

    The state of knowledge in feature extraction for Earth resource observation systems is reviewed and research tasks are proposed. Issues in the subpixel feature estimation problem are defined as: (1) the identification of image models which adequately describe the data and the sensor it is using; (2) the construction of local feature models based on those image models; and (3) the problem of trying to understand these effects of preprocessing on the entire process. The development of ground control point (GCP) libraries for automated selection presents two concerns. One is the organization of these GCP libraries for rectification problems, i.e., the problems of automatically selecting by computer the specific GCP's for particular registration tasks. Second is the importance of integrating ground control patterns in a data base management system, allowing interface to a large number of sensor image types with an automatic selection system. The development of data validation criteria for the comparison of different extraction techniques is also discussed.

  1. Edge and line feature extraction based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges from noisy and/or blurred images.

  2. Multispectral Image Feature Points

    Directory of Open Access Journals (Sweden)

    Cristhian Aguilera

    2012-09-01

    Full Text Available This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like based scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach as well as the improvements it offers with respect to the current state-of-the-art.

  3. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    Kaneko, Tomoyuki; Kodama, Naoki; Kaeriyama, Tomoharu; Fukumoto, Ichiro

    2004-01-01

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from the midsagittal head MR images. Next, Alzheimer-type dementia patients were compared with the healthy elderly individuals using the shape factor features and six Co-occurrence Matrix features of the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using the 7 features demonstrated a sensitivity rate of 82.4%, specificity of 89.3%, and overall accuracy of 85.5%. These results indicated the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on changes in the corpus callosum. (author)

  4. Imaging features of pancreatoblastoma

    International Nuclear Information System (INIS)

    Roebuck, D.J.; Yuen, M.K.; Wong, Y.C.; Shing, M.K.; Li, C.K.; Lee, C.W.

    2001-01-01

    Background. Pancreatoblastoma is a rare tumour of childhood. Reports of the imaging appearances are limited. Objective. To define the imaging features of pancreatoblastoma by analysis of four previously unreported cases and review of the literature. Materials and methods. Findings at CT (n = 4), US (n = 3) and MRI (n = 2) were retrospectively reviewed in four patients with pancreatoblastoma. A Medline search was performed to identify relevant literature. Results. Pancreatoblastoma arises most frequently in the body and/or tail, or involves the entire pancreas. Ultrasonography, CT and MRI show variable imaging features, but should in most cases permit preoperative distinction of pancreatoblastoma from other tumours that occur in this region in infancy and childhood. Detection of metastases in the liver, lymph nodes and peritoneal cavity is not significantly better with any one of these three modalities. Conclusion. Preoperative imaging with US, CT and/or MRI will usually suggest a correct diagnosis of pancreatoblastoma. Contrary to previous reports, the tumour arises in the pancreatic head in a minority of cases. (orig.)

  5. Topological feature extraction and tracking

    International Nuclear Information System (INIS)

    Bremer, P-T; Bringa, E M; Duchaineau, M A; Gyulassy, A G; Laney, D; Mascarenhas, A; Pascucci, V

    2007-01-01

    Scientific datasets obtained by measurement or produced by computational simulations must be analyzed to understand the phenomenon under study. The analysis typically requires a mathematically sound definition of the features of interest and robust algorithms to identify these features, compute statistics about them, and often track them over time. Because scientific datasets often capture phenomena with multi-scale behaviour, and almost always contain noise, the definitions and algorithms must be designed with sufficient flexibility and care to allow multi-scale analysis and noise removal. In this paper, we present some recent work on topological feature extraction and tracking with applications in molecular analysis, combustion simulation, and structural analysis of porous materials

  6. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    Kodama, Naoki; Kaneko, Tomoyuki

    2005-01-01

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia was compared with the healthy elderly individuals using 5 features of the straight-line methods, 5 features of the Run-Length Matrix, and 6 features of the Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)
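
    A hedged sketch of the discriminant-analysis step only: linear discriminant analysis with cross-validation on a synthetic stand-in for the six selected corpus callosum features, reporting sensitivity and specificity as in the abstract. The data, group sizes and separation are invented for illustration:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    # 17 "patients" and 18 "controls", six features each (random placeholders).
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0.8, 1.0, size=(17, 6)), rng.normal(0.0, 1.0, size=(18, 6))])
    y = np.array([1] * 17 + [0] * 18)  # 1 = early dementia, 0 = healthy control

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
    ```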

  7. A two-dimensional matrix image based feature extraction method for classification of sEMG: A comparative analysis based on SVM, KNN and RBF-NN.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen

    2017-01-01

    The computer mouse is an important human-computer interaction device, but patients with a physical finger disability are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and reflects neuromuscular activity. We can therefore control limb auxiliary equipment through sEMG classification in order to help physically disabled patients operate the mouse. The aim was to develop a new method to extract sEMG generated by finger motion and to apply novel features to classify the sEMG. A window-based data acquisition method was presented to extract signal samples from the sEMG electrodes. Afterwards, a two-dimensional matrix image based feature extraction method, which differs from classical methods based on the time domain or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify the sEMG samples effectively. In particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method, which can effectively extract the sEMG samples produced by the fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.

  8. IMAGE RETRIEVAL COLOR, SHAPE AND TEXTURE FEATURES USING CONTENT BASED

    OpenAIRE

    K. NARESH BABU,; SAKE. POTHALAIAH; Dr.K ASHOK BABU

    2010-01-01

    Content-based image retrieval (CBIR) is an important research area for manipulating large image databases and archives. Extraction of invariant features is the basis of CBIR. This paper focuses on the problem of texture, color and shape feature extraction. Using just one type of feature information for comparing images may cause more inaccuracy than using several features. Therefore many image retrieval systems use several kinds of feature information, such as color, shape and other features. W...

  9. Abdominal tuberculosis: Imaging features

    International Nuclear Information System (INIS)

    Pereira, Jose M.; Madureira, Antonio J.; Vieira, Alberto; Ramos, Isabel

    2005-01-01

    Radiological findings of abdominal tuberculosis can mimic those of many different diseases. A high level of suspicion is required, especially in high-risk populations. In this article, we will describe barium studies, ultrasound (US) and computed tomography (CT) findings of abdominal tuberculosis (TB), with emphasis on the latter. We will illustrate CT findings that can help in the diagnosis of abdominal tuberculosis and describe imaging features that differentiate it from other inflammatory and neoplastic diseases, particularly lymphoma and Crohn's disease. As tuberculosis can affect any organ in the abdomen, emphasis is placed on ileocecal involvement, lymphadenopathy, peritonitis and solid organ disease (liver, spleen and pancreas). A positive culture or histologic analysis of a biopsy is still required in many patients for definitive diagnosis. Learning objectives: 1. To review the relevant pathophysiology of abdominal tuberculosis. 2. Illustrate CT findings that can help in the diagnosis

  10. Abdominal tuberculosis: Imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Jose M. [Department of Radiology, Hospital de S. Joao, Porto (Portugal)]. E-mail: jmpjesus@yahoo.com; Madureira, Antonio J. [Department of Radiology, Hospital de S. Joao, Porto (Portugal); Vieira, Alberto [Department of Radiology, Hospital de S. Joao, Porto (Portugal); Ramos, Isabel [Department of Radiology, Hospital de S. Joao, Porto (Portugal)

    2005-08-01

    Radiological findings of abdominal tuberculosis can mimic those of many different diseases. A high level of suspicion is required, especially in high-risk populations. In this article, we will describe barium studies, ultrasound (US) and computed tomography (CT) findings of abdominal tuberculosis (TB), with emphasis on the latter. We will illustrate CT findings that can help in the diagnosis of abdominal tuberculosis and describe imaging features that differentiate it from other inflammatory and neoplastic diseases, particularly lymphoma and Crohn's disease. As tuberculosis can affect any organ in the abdomen, emphasis is placed on ileocecal involvement, lymphadenopathy, peritonitis and solid organ disease (liver, spleen and pancreas). A positive culture or histologic analysis of a biopsy is still required in many patients for definitive diagnosis. Learning objectives: 1. To review the relevant pathophysiology of abdominal tuberculosis. 2. Illustrate CT findings that can help in the diagnosis.

  11. HUMAN IDENTIFICATION BASED ON EXTRACTED GAIT FEATURES

    OpenAIRE

    Hu Ng; Hau-Lee Ton; Wooi-Haw Tan; Timothy Tzen-Vun Yap; Pei-Fen Chong; Junaidi Abdullah

    2011-01-01

    This paper presents a human identification system based on automatically extracted gait features. The proposed approach consists of three parts: extraction of human gait features from an enhanced human silhouette, a smoothing process on the extracted gait features, and classification by three classification techniques: fuzzy k-nearest neighbour, linear discriminant analysis and linear support vector machine. The gait features extracted are height, width, crotch height, step-size of the human silhouett...

  12. Localized scleroderma: imaging features

    International Nuclear Information System (INIS)

    Liu, P.; Uziel, Y.; Chuang, S.; Silverman, E.; Krafchik, B.; Laxer, R.

    1994-01-01

    Localized scleroderma is distinct from the diffuse form of scleroderma and does not show Raynaud's phenomenon and visceral involvement. The imaging features in 23 patients ranging from 2 to 17 years of age (mean 11.1 years) were reviewed. Leg length discrepancy and muscle atrophy were the most common findings (five patients), with two patients also showing modelling deformity of the fibula. One patient with lower extremity involvement showed abnormal bone marrow signals on MR. Disabling joint contracture requiring orthopedic intervention was noted in one patient. In two patients with "en coup de sabre" facial deformity, CT and MR scans revealed intracranial calcifications and white matter abnormality in the ipsilateral frontal lobes, with one also showing migrational abnormality. In a third patient, CT revealed white matter abnormality in the ipsilateral parietal lobe. In one patient with progressive facial hemiatrophy, CT and MR scans showed the underlying hypoplastic left maxillary antrum and cheek. Imaging studies of areas of clinical concern revealed positive findings in half our patients. (orig.)

  13. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available A scale-space extreme point extraction method for a binary multiscale and rotation invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms often select the neighborhood information of feature points that are extremes of the image scale space, obtained by constructing an image pyramid with some signal transform. But building the image pyramid always consumes a large amount of computing and storage resources and is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm that does not need to build an image pyramid but can quickly extract feature points at scale extremes. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.
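
    The dual multiscale FAST algorithm itself is not reproduced in the record; as an illustration of detecting FAST corners at more than one scale without building a full image pyramid, a sketch using OpenCV. The threshold, the scale set and the input file name are assumptions:

    ```python
    import cv2
    import numpy as np

    def multiscale_fast(gray: np.ndarray, scales=(1.0, 0.5), threshold: int = 25):
        """Detect FAST corners at a few image scales and map them back to original coordinates."""
        detector = cv2.FastFeatureDetector_create(threshold=threshold)
        points = []
        for s in scales:
            resized = cv2.resize(gray, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
            for kp in detector.detect(resized, None):
                points.append((kp.pt[0] / s, kp.pt[1] / s, s))  # (x, y, detection scale)
        return points

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    if img is not None:
        print(len(multiscale_fast(img)))
    ```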

  14. Decision Boundary Feature Extraction for Nonparametric Classification

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    Feature extraction has long been an important topic in pattern recognition. Although many authors have studied feature extraction for parametric classifiers, relatively few feature extraction algorithms are available for nonparametric classifiers. A new feature extraction algorithm based on decision boundaries for nonparametric classifiers is proposed. It is noted that feature extraction for pattern recognition is equivalent to retaining 'discriminantly informative features' and a discriminantly informative feature is related to the decision boundary. Since nonparametric classifiers do not define decision boundaries in analytic form, the decision boundary and normal vectors must be estimated numerically. A procedure to extract discriminantly informative features based on a decision boundary for non-parametric classification is proposed. Experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.

  15. Decision boundary feature extraction for neural networks

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David A.

    1992-01-01

    We propose a new feature extraction method for neural networks. The method is based on the recently published decision boundary feature extraction algorithm. It has been shown that all the necessary features for classification can be extracted from the decision boundary. To apply the decision boundary feature extraction method, we first define the decision boundary in neural networks. Next, we propose a procedure for extracting all the necessary features for classification from the decision boundary. The proposed algorithm preserves the characteristics of neural networks, which can define arbitrary decision boundary. Experiments show promising results.

  16. Biometric feature extraction using local fractal auto-correlation

    International Nuclear Information System (INIS)

    Chen Xi; Zhang Jia-Shu

    2014-01-01

    Image texture feature extraction is a classical means for biometric recognition. To extract effective texture feature for matching, we utilize local fractal auto-correlation to construct an effective image texture descriptor. Three main steps are involved in the proposed scheme: (i) using two-dimensional Gabor filter to extract the texture features of biometric images; (ii) calculating the local fractal dimension of Gabor feature under different orientations and scales using fractal auto-correlation algorithm; and (iii) linking the local fractal dimension of Gabor feature under different orientations and scales into a big vector for matching. Experiments and analyses show our proposed scheme is an efficient biometric feature extraction approach. (condensed matter: structural, mechanical, and thermal properties)
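
    A minimal sketch of step (i) only, extracting Gabor magnitude responses at several frequencies and orientations with scikit-image; the local fractal auto-correlation of steps (ii)-(iii) is not reproduced, and the frequencies and orientation count are assumptions:

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_feature_stack(gray: np.ndarray, frequencies=(0.1, 0.2), n_orientations: int = 4):
        """Magnitude responses of a small 2D Gabor filter bank, stacked along the last axis."""
        responses = []
        for f in frequencies:
            for k in range(n_orientations):
                real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orientations)
                responses.append(np.hypot(real, imag))
        return np.stack(responses, axis=-1)

    rng = np.random.default_rng(0)
    print(gabor_feature_stack(rng.random((32, 32))).shape)  # (32, 32, 8)
    ```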

  17. Classifications of Image Features: A Survey | Lichun | Discovery and ...

    African Journals Online (AJOL)

    An image feature is a descriptor of an image, which can avoid redundant data and reduce the effects of noise and variance. In computer imaging, feature selection is vital for researchers and processors. Feature extraction and image processing are based on the mathematical selection, computation and manipulation of ...

  18. ANTHOCYANINS ALIPHATIC ALCOHOLS EXTRACTION FEATURES

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Full Text Available Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as the dye food additive E163. Ethanol or acidified water is traditionally used to extract them from natural vegetable raw materials, but in some technologies this is unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, extraction of the pigments with alcohols differing in the structure of the carbon skeleton and in the position and number of hydroxyl groups was explored. For the isolation of anthocyanins, raw materials were extracted sequentially twice at t = 60 °C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid chromaticity measurements. The color of black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group; alcohols of normal structure give a higher optical density and a higher red color component than alcohols of isomeric structure. This is due to their different ability to form hydrogen bonds when extracting anthocyanins, and to other intermolecular interactions. During storage, the pigments recovered in blackcurrant extracts undergo significant structural changes, which leads to a significant change in color. This variation is stronger the longer the carbon skeleton and the more branched the extractant molecule. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols. However, these extracts are preserved significantly better because of their reducing ability when interacting with polyphenolic compounds.

  19. Extraction of latent images from printed media

    Science.gov (United States)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is versatility: it allows extraction of latent images produced by different texture variations. Experimental results showing the performance of the method compared with another known system for latent image extraction are given.

  20. Feature Extraction Based on Decision Boundaries

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises by noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) It predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  1. DIMENSIONALITY REDUCTION OF HYPERSPECTRAL IMAGES BY COMBINATION OF NON-PARAMETRIC WEIGHTED FEATURE EXTRACTION (NWFE) AND MODIFIED NEIGHBORHOOD PRESERVING EMBEDDING (NPE)

    Directory of Open Access Journals (Sweden)

    T. Alipour Fard

    2014-10-01

    Full Text Available This paper combines two conventional feature extraction methods (NWFE and NPE) in a novel framework and presents a new semi-supervised feature extraction method called Adjusted Semi-supervised Discriminant Analysis (ASEDA). The advantages of this method are overcoming the Hughes phenomenon, automatic selection of unlabelled pixels, extraction of more than L-1 (L: number of classes) features, and avoidance of singularity or near-singularity of the within-class scatter matrix. Experimental results on a well-known hyperspectral dataset demonstrate that, compared to conventional extraction algorithms, the overall classification accuracy increased.

  2. Dimensionality Reduction of Hyperspectral Images by Combination of Non-Parametric Weighted Feature Extraction (nwfe) and Modified Neighborhood Preserving Embedding (npe)

    Science.gov (United States)

    Alipour Fard, T.; Arefi, H.

    2014-10-01

    This paper combines two conventional feature extraction methods (NWFE and NPE) in a novel framework and presents a new semi-supervised feature extraction method called Adjusted Semi-supervised Discriminant Analysis (ASEDA). The advantages of this method are overcoming the Hughes phenomenon, automatic selection of unlabelled pixels, extraction of more than L-1 (L: number of classes) features, and avoidance of singularity or near-singularity of the within-class scatter matrix. Experimental results on a well-known hyperspectral dataset demonstrate that, compared to conventional extraction algorithms, the overall classification accuracy increased.

  3. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Atul Bansal

    Abstract. Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. Ham-.
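
    The record only summarizes the statistic; as a hedged illustration of a correlation-between-adjacent-pixels feature, the sketch below computes the Pearson correlation of horizontally and vertically neighbouring pixels on a stand-in normalised iris strip (the array is a random placeholder):

    ```python
    import numpy as np

    def adjacent_pixel_correlation(gray: np.ndarray):
        """Pearson correlation between horizontally and vertically adjacent pixels."""
        g = gray.astype(float)
        horiz = np.corrcoef(g[:, :-1].ravel(), g[:, 1:].ravel())[0, 1]
        vert = np.corrcoef(g[:-1, :].ravel(), g[1:, :].ravel())[0, 1]
        return horiz, vert

    rng = np.random.default_rng(0)
    iris_strip = rng.random((48, 256))  # stand-in for a normalised iris texture strip
    print(adjacent_pixel_correlation(iris_strip))
    ```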

  4. [A novel spectrum feature extraction method].

    Science.gov (United States)

    Li, Xiang-Ru; Feng, Chun-Ming; Wang, Yong-Jun; Lu, Yu

    2011-10-01

    The present work focuses on the celestial spectrum feature extraction problem, which is a key procedure in automatic spectra classification. By extracting features, the authors can reduce redundancy, alleviate noise influence, and improve accuracy and efficiency in spectra classification. The authors introduced a novel feature analysis framework, STP (space transformation and partition), which focuses on the essential components of feature extraction: decomposing and reorganizing spectrum components, alleviating noise influence, and eliminating redundancy. Based on STP, most of the available feature extraction methods can be analyzed, for example, the unsupervised methods principal component analysis (PCA) and wavelet transform, and the supervised methods support vector machine (SVM), relevance vector machine (RVM), linear discriminant analysis (LDA), etc. The authors introduced a novel feature analysis framework and proposed a novel feature extraction method. The outstanding characteristics of the proposed method are its simplicity and efficiency. Research shows that it is sufficient to extract features by the proposed method in some cases, and it is not necessary to use the more sophisticated methods, which are usually more complex in computation. The proposed method is evaluated in classifying Galaxy and QSO spectra, which are disturbed by redshift and are representative in automatic spectra classification research. The results are practical and helpful for gaining novel insight into the traditional feature extraction methods and designing more efficient spectrum classification methods.

  5. Imaging features of thalassemia

    Energy Technology Data Exchange (ETDEWEB)

    Tunaci, M.; Tunaci, A.; Engin, G.; Oezkorkmaz, B.; Acunas, G.; Acunas, B. [Dept. of Radiology, Istanbul Univ. (Turkey); Dincol, G. [Dept. of Internal Medicine, Istanbul Univ. (Turkey)

    1999-07-01

    Thalassemia is a chronic, inherited, microcytic anemia characterized by defective hemoglobin synthesis and ineffective erythropoiesis. In all thalassemias, the clinical features that result from anemia and from transfusional and absorptive iron overload are similar but vary in severity. The radiographic features of β-thalassemia are due in large part to marrow hyperplasia. The markedly expanded marrow space leads to various skeletal manifestations involving the spine, skull, facial bones, and ribs. Extramedullary hematopoiesis (ExmH), hemosiderosis, and cholelithiasis are among the non-skeletal manifestations of thalassemia. The skeletal X-ray findings show the characteristics of chronic overactivity of the marrow. In this article, both skeletal and non-skeletal manifestations of thalassemia are discussed with an overview of X-ray findings, including MRI and CT findings. (orig.)

  6. Pulmonary vasculitis: imaging features

    International Nuclear Information System (INIS)

    Seo, Joon Beom; Im, Jung Gi; Chung, Jin Wook; Goo, Jin Mo; Park, Jae Hyung; Yeon, Kyung Mo; Song, Jae Woo

    1999-01-01

    Vasculitis is defined as an inflammatory process involving blood vessels, and can lead to destruction of the vascular wall and ischemic damage to the organs supplied by these vessels. The lung is commonly affected. A number of attempts have been made to classify and organize pulmonary vasculitis, but because the clinical manifestations and pathologic features of the condition overlap considerably, these efforts have failed to achieve a consensus. We classified pulmonary vasculitis as belonging to either the angiitis-granulomatosis group, the diffuse pulmonary hemorrhage with capillaritis group, or 'other'. Characteristic radiographic and CT findings of the different types of pulmonary vasculitis are illustrated, with a brief discussion of the respective disease entities

  7. Learning Hierarchical Feature Extractors for Image Recognition

    Science.gov (United States)

    2012-09-01

    recognition, but the analysis applies to all tasks which incorporate some form of pooling (e.g., text processing from which the bag-of-features method ...performance rely on solving an ℓ1-regularized optimization. Several efficient algorithms have been devised for this problem. Homotopy methods such as the...recent advances in image recognition. First, we recast many methods into a common unsupervised feature extraction framework based on an alternation of

  8. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents.

    Science.gov (United States)

    Zhang, Jing; Lo, Joseph Y; Kuzmiak, Cherie M; Ghate, Sujata V; Yoon, Sora C; Mazurowski, Maciej A

    2014-09-01

    Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.

  9. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI,0.564-0.650). This value was statistically significantly different

  10. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    International Nuclear Information System (INIS)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI,0.564-0.650). This value was statistically significantly different

  11. Electronic Nose Feature Extraction Methods: A Review.

    Science.gov (United States)

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-11-02

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology.

  12. Meta-optimization of the extended kalman filter's parameters for improved feature extraction on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2011-07-01

    Full Text Available ... $y_{i,k,b}$ via a non-linear measurement function $h_b$. Both these models are possibly non-perfect, so the addition of process noise $w_{i,k,b}$ and measurement noise $v_{i,k,b}$ is required [5]. This is expressed as $x_{i,k,b} = x_{i,(k-1),b} + w_{i,k,b}$ (3) and $\hat{y}_{i,k,b} = h_b(x_{i,k,b}) + v_{i,k,b}$ (4). Both state vector features may be estimated over time $k$ by recursive iteration [5] based on the observation data $y_{i,k,b}$ up to time $k$. The predicted measurement for the $b$-th spectral band is denoted by $\hat{y}_{i,k,b}$ in (4). Function $h_b$ ...
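    The snippet describes a random-walk state model (3) with a nonlinear measurement function (4). A minimal extended Kalman filter step for that model is sketched below; the measurement function, its Jacobian, and the noise covariances Q and R are illustrative placeholders (the cited work tunes such parameters via meta-optimization).

```python
# Minimal EKF step for x_k = x_{k-1} + w_k and y_k = h(x_k) + v_k.
import numpy as np

def ekf_step(x, P, y, h, H_jac, Q, R):
    x_pred = x                      # predict: identity state transition
    P_pred = P + Q
    H = H_jac(x_pred)               # update with the new observation y
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: scalar state observed through a sinusoidal measurement function.
h = lambda x: np.sin(x)
H_jac = lambda x: np.cos(x).reshape(1, 1)
x, P = np.array([0.1]), np.eye(1)
x, P = ekf_step(x, P, np.array([0.2]), h, H_jac, Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x)
```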

  13. imaging features of hepatic angiomyolipomas

    International Nuclear Information System (INIS)

    Low, S.C.S.; Peh, W.C.G.; Muttarak, M.; Cheung, H.S.; Ng, I.O.L.

    2008-01-01

    Full text: We review the imaging appearances of hepatic angiomyolipomas in patients with and without tuberous sclerosis. Sporadic hepatic angiomyolipomas have a varied appearance because of the inconstant proportion of fat, making confident imaging diagnosis difficult and necessitating biopsy in many cases. In patients with tuberous sclerosis, hepatic angiomyolipomas have a more consistent imaging appearance and, together with other features of the syndrome, can be more easily diagnosed. Preoperative diagnosis helps obviate unnecessary surgery.

  14. Image feature detectors and descriptors foundations and applications

    CERN Document Server

    Hassaballah, Mahmoud

    2016-01-01

    This book provides readers with a selection of high-quality chapters that cover both theoretical concepts and practical applications of image feature detectors and descriptors. It serves as a reference for researchers and practitioners by featuring survey chapters and research contributions on image feature detectors and descriptors. Additionally, it emphasizes several keywords in both theoretical and practical aspects of image feature extraction. The keywords include acceleration of feature detection and extraction, hardware implementations, image segmentation, evolutionary algorithms, ordinal measures, as well as visual speech recognition.

  15. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    The abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes, which are not necessarily perceptible by visual inspection, but they could be detected by using a texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using the Daubechies wavelets; ii) the original images were transformed into image features using the first-order descriptors; iii) the regions of interest (ROIs) were cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measurement of features was quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed by using the t-test method. The p-value has been applied to the pairs of features in order to measure their efficacy.
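    Step iv above quantifies texture with the GLCM energy and homogeneity descriptors. The sketch below computes both from a quantized ROI using plain NumPy; the horizontal offset (0, 1) and the number of gray levels are illustrative choices, not the paper's exact settings.

```python
# GLCM energy and homogeneity for one ROI.
import numpy as np

def glcm_energy_homogeneity(roi, levels=16):
    q = (roi.astype(float) / roi.max() * (levels - 1)).astype(int)   # quantize gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):            # pixel pairs at offset (0, 1)
        glcm[i, j] += 1
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(ii - jj)))
    return energy, homogeneity

roi = np.random.default_rng(0).integers(0, 255, size=(64, 64))       # placeholder ROI
print(glcm_energy_homogeneity(roi))
```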

  16. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Atul Bansal

    features obtained from one's face [1], finger [2], voice [3] and/or iris [4, 5]. Iris recognition system is widely used in high security areas. A number of researchers have proposed various algorithms for feature extraction. A little work [6,. 7] however, has been reported using statistical techniques directly on pixel values in order to ...

  17. Imaging features of aggressive angiomyxoma

    International Nuclear Information System (INIS)

    Jeyadevan, N.N.; Sohaib, S.A.A.; Thomas, J.M.; Jeyarajah, A.; Shepherd, J.H.; Fisher, C.

    2003-01-01

    AIM: To describe the imaging features of aggressive angiomyxoma, a rare benign mesenchymal tumour most frequently arising from the perineum in young female patients. MATERIALS AND METHODS: We reviewed the computed tomography (CT) and magnetic resonance (MR) imaging features of patients with aggressive angiomyxoma who were referred to our hospital. The imaging features were correlated with clinical information and pathology in all patients. RESULTS: Four CT and five MR studies were available for five patients (all women, mean age 39, range 24-55). Three patients had recurrent tumour at follow-up. CT and MR imaging demonstrated a well-defined mass displacing adjacent structures. The tumour was of low attenuation relative to muscle on CT. On MR, the tumour was isointense relative to muscle on T1-weighted images, hyperintense on T2-weighted images and enhanced avidly after gadolinium contrast with a characteristic 'swirled' internal pattern. MR imaging demonstrates the extent of the tumour and its relation to the pelvic floor. Recurrent tumour has a similar appearance to the primary lesion. CONCLUSION: The MR appearances of aggressive angiomyxomas are characteristic, and the diagnosis should be considered in any young woman presenting with a well-defined mass arising from the perineum. Jeyadevan, N. N. et al. (2003). Clinical Radiology 58, 157-162.

  18. Imaging features of aggressive angiomyxoma

    Energy Technology Data Exchange (ETDEWEB)

    Jeyadevan, N.N.; Sohaib, S.A.A.; Thomas, J.M.; Jeyarajah, A.; Shepherd, J.H.; Fisher, C

    2003-02-01

    AIM: To describe the imaging features of aggressive angiomyxoma, a rare benign mesenchymal tumour most frequently arising from the perineum in young female patients. MATERIALS AND METHODS: We reviewed the computed tomography (CT) and magnetic resonance (MR) imaging features of patients with aggressive angiomyxoma who were referred to our hospital. The imaging features were correlated with clinical information and pathology in all patients. RESULTS: Four CT and five MR studies were available for five patients (all women, mean age 39, range 24-55). Three patients had recurrent tumour at follow-up. CT and MR imaging demonstrated a well-defined mass displacing adjacent structures. The tumour was of low attenuation relative to muscle on CT. On MR, the tumour was isointense relative to muscle on T1-weighted images, hyperintense on T2-weighted images and enhanced avidly after gadolinium contrast with a characteristic 'swirled' internal pattern. MR imaging demonstrates the extent of the tumour and its relation to the pelvic floor. Recurrent tumour has a similar appearance to the primary lesion. CONCLUSION: The MR appearances of aggressive angiomyxomas are characteristic, and the diagnosis should be considered in any young woman presenting with a well-defined mass arising from the perineum. Jeyadevan, N. N. et al. (2003). Clinical Radiology 58, 157-162.

  19. On-line object feature extraction for multispectral scene representation

    Science.gov (United States)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies, which exist among the adjacent pixels, are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object was defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features is evaluated.

  20. Feature extraction of the wafer probe marks in IC packaging

    Science.gov (United States)

    Tsai, Cheng-Yu; Lin, Chia-Te; Kao, Chen-Ting; Wang, Chau-Shing

    2017-12-01

    This paper presents an image processing approach to extract six features of the probe mark on semiconductor wafer pads. The electrical characteristics of the chip pad must be tested using a probing needle before wire-bonding to the wafer. However, this test leaves probe marks on the pad. A large probe mark area results in poor adhesion forces at the bond ball of the pad, thus leading to undesirable products. In this paper, we present a method to extract six features of the wafer probe marks in IC packaging for further digital image processing.
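    The sketch below measures a few geometric properties (area, bounding box, centroid) of a probe-mark region from a binary mask of the pad, as an illustration of this kind of mark measurement; it does not reproduce the paper's six specific features, and the mask is a placeholder.

```python
# Geometric measurements of a probe-mark region from a binary pad mask.
import numpy as np
from scipy import ndimage

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 45:70] = True                       # placeholder probe-mark region

labels, n = ndimage.label(mask)
area = int(mask.sum())
(sl,) = ndimage.find_objects(labels)            # bounding box of the single region
cy, cx = ndimage.center_of_mass(mask)
bbox_h, bbox_w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
print(area, (bbox_h, bbox_w), (round(cy, 1), round(cx, 1)))
```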

  1. Imaging features of kaposiform lymphangiomatosis

    International Nuclear Information System (INIS)

    Goyal, Pradeep; Alomari, Ahmad I.; Shaikh, Raja; Chaudry, Gulraiz; Kozakewich, Harry P.; Perez-Atayde, Antonio R.; Trenor, Cameron C.; Fishman, Steven J.; Greene, Arin K.

    2016-01-01

    Kaposiform lymphangiomatosis is a rare, aggressive lymphatic disorder. The imaging and presenting features of kaposiform lymphangiomatosis can overlap with those of central conducting lymphatic anomaly and generalized lymphatic anomaly. To analyze the imaging findings of kaposiform lymphangiomatosis disorder and highlight features most suggestive of this diagnosis. We retrospectively identified and characterized 20 children and young adults with histopathological diagnosis of kaposiform lymphangiomatosis and radiologic imaging referred to the vascular anomalies center between 1995 and 2015. The median age at onset was 6.5 years (range 3 months to 27 years). The most common presenting features were respiratory compromise (dyspnea, cough, chest pain; 55.5%), swelling/mass (25%), bleeding (15%) and fracture (5%). The thoracic cavity was involved in all patients; all patients had mediastinal involvement followed by lung parenchymal disease (90%) and pleural (85%) and pericardial (50%) effusions. The most common extra-thoracic sites of disease were the retroperitoneum (80%), bone (60%), abdominal viscera (55%) and muscles (45%). There was characteristic enhancing and infiltrative soft-tissue thickening in the mediastinum and retroperitoneum extending along the lymphatic distribution. Kaposiform lymphangiomatosis has overlapping imaging features with central conducting lymphatic anomaly and generalized lymphatic anomaly. Presence of mediastinal or retroperitoneal enhancing and infiltrative soft-tissue disease along the lymphatic distribution, hemorrhagic effusions and moderate thrombocytopenia (50-100,000/μl) should favor diagnosis of kaposiform lymphangiomatosis. (orig.)

  2. Feature-enhanced synthetic aperture radar imaging

    Science.gov (United States)

    Cetin, Mujdat

    Remotely sensed images have already attained an important role in a wide spectrum of tasks ranging from weather forecasting to battlefield reconnaissance. One of the most promising remote sensing technologies is the imaging radar, known as synthetic aperture radar (SAR). SAR overcomes the nighttime limitations of optical cameras, and the cloud-cover limitations of both optical and infrared imagers. In current systems, techniques such as the polar format algorithm are used to form images from the collected SAR data. These images are then interpreted by human observers. However, the anticipated high data rates and the time critical nature of emerging SAR tasks motivate the use of automated processing or decision-making techniques in information extraction from the reconstructed images. The success of such automated decision-making (e.g. object recognition) depends on how well SAR images exhibit certain features of the underlying scene. Unfortunately, current SAR image formation techniques have no explicit means to highlight features useful for automatic interpretation. Furthermore, these techniques are usually not robust to reduced quality or quantity of data. We have developed a mathematical foundation and associated algorithms for feature-enhanced SAR imaging to address such challenges. Our framework is based on a regularized reconstruction of the scattering field which combines a tomographic model of the SAR observation process with prior information regarding the nature of the features of interest. We demonstrate the inclusion of prior information through a variety of non-quadratic potential functions. Efficient and robust numerical solution of the optimization problems posed in our framework is achieved through novel extensions of half-quadratic regularization methods to the complex-valued SAR problem. We have established a methodology for quantitative evaluation of a SAR image formation technique based on recognition-oriented features. Through qualitative and

  3. Neuroimaging Feature Terminology: A Controlled Terminology for the Annotation of Brain Imaging Features

    Science.gov (United States)

    Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B.; Hofmann-Apitius, Martin

    2017-01-01

    Ontologies and terminologies are used for interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of measured brain features in association with neurodegenerative diseases by imaging technologies. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. This terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from literature and how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes. PMID:28731430

  4. Extracting Features of Acacia Plantation and Natural Forest in the Mountainous Region of Sarawak, Malaysia by ALOS/AVNIR2 Image

    Science.gov (United States)

    Fadaei, H.; Ishii, R.; Suzuki, R.; Kendawang, J.

    2013-12-01

    The remote sensing technique has provided useful information to detect spatio-temporal changes in the land cover of tropical forests. Land cover characteristics derived from satellite image can be applied to the estimation of ecosystem services and biodiversity over an extensive area, and such land cover information would provide valuable information to global and local people to understand the significance of the tropical ecosystem. This study was conducted in the Acacia plantations and natural forest situated in the mountainous region which has different ecological characteristic from that in flat and low land area in Sarawak, Malaysia. The main objective of this study is to compare extract the characteristic of them by analyzing the ALOS/AVNIR2 images and ground truthing obtained by the forest survey. We implemented a ground-based forest survey at Aacia plantations and natural forest in the mountainous region in Sarawak, Malaysia in June, 2013 and acquired the forest structure data (tree height, diameter at breast height (DBH), crown diameter, tree spacing) and spectral reflectance data at the three sample plots of Acacia plantation that has 10 x 10m area. As for the spectral reflectance data, we measured the spectral reflectance of the end members of forest such as leaves, stems, road surface, and forest floor by the spectro-radiometer. Such forest structure and spectral data were incorporated into the image analysis by support vector machine (SVM) and object-base/texture analysis. Consequently, land covers on the AVNIR2 image were classified into three forest types (natural forest, oil palm plantation and acacia mangium plantation), then the characteristic of each category was examined. We additionally used the tree age data of acacia plantation for the classification. A unique feature was found in vegetation spectral reflectance of Acacia plantations. The curve of the spectral reflectance shows two peaks around 0.3μm and 0.6 - 0.8μm that can be assumed to
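    The classification stage described above uses an SVM on per-pixel band values. A minimal sketch under the assumption of four AVNIR-2 bands per training pixel and three land-cover classes is given below; the band values and labels are synthetic placeholders.

```python
# SVM classification of multispectral pixels into three land-cover classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))        # 4 AVNIR-2 band values per training pixel (placeholder)
y = rng.integers(0, 3, size=600)     # 0 = natural forest, 1 = oil palm, 2 = Acacia mangium

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```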

  5. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis of extracting a 1D oscillation time series (OTS) for an image. However, the traditional methods using BMS did not consider the correlation of the binary sequence in BMS and the space structure of every map. By further processing of BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); the method lessens the influence of noncontinuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy for every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometry invariance for the facial image and contains the space structure information of the image. Finally, by analyzing the OTS-FMS, the standard Euclidean distance is used to measure the distances for OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.

  6. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of extracting features in the radar target classification process using a J-frequency-band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier transform (DFT) and Multiple Signal Characterisation (MUSIC), based on the detection of the Doppler effect. The analysis led to a preference for the DFT with a Hanning windowing function. We aimed to classify vehicle targets into two classes, wheeled vehicles and tracked vehicles. The results show that it is possible to classify them only while moving. The class feature results from the movement of the moving parts of the vehicle. However, we have not found any feature that classifies wheeled and tracked vehicles while stationary, even though their engines are on.
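    A minimal sketch of the preferred combination above, a Hann (Hanning) window applied before the DFT of a radar return to estimate the Doppler spectrum; the echo is a synthetic two-component signal rather than real radar data, and the pulse repetition frequency is an assumption.

```python
# Doppler spectrum of a pulse-radar return with a Hann window before the DFT.
import numpy as np

fs = 1000.0                                     # pulse repetition frequency (Hz), illustrative
t = np.arange(1024) / fs
echo = np.exp(2j * np.pi * 60 * t) + 0.5 * np.exp(2j * np.pi * 180 * t)   # synthetic echo

window = np.hanning(len(echo))
spectrum = np.abs(np.fft.fftshift(np.fft.fft(echo * window)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(echo), d=1 / fs))
print("dominant Doppler frequency:", freqs[np.argmax(spectrum)], "Hz")
```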

  7. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    imagery, and 2D/3D acoustic images (from hydrographic surveys). The application involving satellite imagery shown in this paper is coastline detection, but the methodology can be easily applied to feature extraction on any kind of imagery. A prototype application that is developed as part of this research...

  8. Imaging features of musculoskeletal tuberculosis

    International Nuclear Information System (INIS)

    Vuyst, Dimitri De; Vanhoenacker, Filip; Bernaerts, Anja; Gielen, Jan; Schepper, Arthur M. de

    2003-01-01

    The purpose of this article is to review the imaging characteristics of musculoskeletal tuberculosis. Skeletal tuberculosis represents one-third of all cases of tuberculosis occurring in extrapulmonary sites. Hematogenous spread from a distant focus elsewhere in the body is the cornerstone in the understanding of imaging features of musculoskeletal tuberculosis. The most common presentations are tuberculous spondylitis, arthritis, osteomyelitis, and soft tissue involvement. The diagnostic value of the different imaging techniques, which include conventional radiography, CT, and MR imaging, are emphasized. Whereas conventional radiography is the mainstay in the diagnosis of tuberculous arthritis and osteomyelitis, MR imaging may detect associated bone marrow and soft tissue abnormalities. MR imaging is generally accepted as the imaging modality of choice for diagnosis, demonstration of the extent of the disease of tuberculous spondylitis, and soft tissue tuberculosis. Moreover, it may be very helpful in the differential diagnosis with pyogenic spondylodiscitis, as it may easily demonstrate anterior corner destruction, the relative preservation of the intervertebral disk, multilevel involvement with or without skip lesions, and a large soft tissue abscess, as these are all arguments in favor of a tuberculous spondylitis. On the other hand, CT is still superior in the demonstration of calcifications, which are found in chronic tuberculous abscesses. (orig.)

  9. Imaging features of musculoskeletal tuberculosis

    Energy Technology Data Exchange (ETDEWEB)

    Vuyst, Dimitri De [Department of Radiology, AZ Sint-Maarten, Campus Duffel, Rooienberg 25, 2570 Duffel (Belgium); Vanhoenacker, Filip; Bernaerts, Anja [Department of Radiology, AZ Sint-Maarten, Campus Duffel, Rooienberg 25, 2570 Duffel (Belgium); Department of Radiology, University Hospital Antwerp, Wilrijkstraat 10, 2650 Edegem (Belgium); Gielen, Jan; Schepper, Arthur M. de [Department of Radiology, University Hospital Antwerp, Wilrijkstraat 10, 2650 Edegem (Belgium)

    2003-08-01

    The purpose of this article is to review the imaging characteristics of musculoskeletal tuberculosis. Skeletal tuberculosis represents one-third of all cases of tuberculosis occurring in extrapulmonary sites. Hematogenous spread from a distant focus elsewhere in the body is the cornerstone in the understanding of imaging features of musculoskeletal tuberculosis. The most common presentations are tuberculous spondylitis, arthritis, osteomyelitis, and soft tissue involvement. The diagnostic value of the different imaging techniques, which include conventional radiography, CT, and MR imaging, are emphasized. Whereas conventional radiography is the mainstay in the diagnosis of tuberculous arthritis and osteomyelitis, MR imaging may detect associated bone marrow and soft tissue abnormalities. MR imaging is generally accepted as the imaging modality of choice for diagnosis, demonstration of the extent of the disease of tuberculous spondylitis, and soft tissue tuberculosis. Moreover, it may be very helpful in the differential diagnosis with pyogenic spondylodiscitis, as it may easily demonstrate anterior corner destruction, the relative preservation of the intervertebral disk, multilevel involvement with or without skip lesions, and a large soft tissue abscess, as these are all arguments in favor of a tuberculous spondylitis. On the other hand, CT is still superior in the demonstration of calcifications, which are found in chronic tuberculous abscesses. (orig.)

  10. Imaging features of female pseudohermaphroditism

    International Nuclear Information System (INIS)

    Wang Jian; Han Xinian; Liu Guanghua; Wang Chenguang; Jia Ningyang; Xue Feng

    2005-01-01

    Objective: To evaluate the imaging features of female pseudohermaphroditism. Methods: The imaging findings in 9 cases of female pseudohermaphroditism were analyzed retrospectively. Results: Thickening, prolongation, and twisting of the bilateral adrenal glands were found in 7 untreated cases, 2 of whom had macronodular hyperplasia. One of the treated cases was without thickening or twisting of the adrenal glands; the other treated case coexisted with an adrenal myelolipoma. Agenesis of the uterus and vagina was found in 4 cases. Conclusion: Female pseudohermaphroditism is a hereditary disease, and the hyperplasia of the adrenal glands and agenesis of the uterus and vagina were secondary manifestations. Early detection of these abnormalities could be achieved with imaging modalities, and early treatment could result in remedy of these abnormalities. (authors)

  11. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to a successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.
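    As a generic stand-in for this kind of threshold-based boundary extraction (the paper's seeded, user-thresholded variant is not reproduced), the sketch below traces the contour of a bright structure in a synthetic CT-like slice using marching squares.

```python
# Threshold-based boundary extraction from a 2D slice with marching squares.
import numpy as np
from skimage import measure

yy, xx = np.mgrid[0:128, 0:128]
slice_ = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)   # synthetic bright "bone" disc

contours = measure.find_contours(slice_, level=0.5)
print("number of contours:", len(contours), "points on first contour:", len(contours[0]))
```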

  12. Feature representation of RGB-D images using joint spatial-depth feature pooling

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2016-01-01

    Recent development in depth imaging technology makes acquisition of depth information easier. With the additional depth cue, RGB-D cameras can provide effective support for many RGB-D perception tasks beyond traditional RGB information. However, current feature representation based on RGB-D images utilizes depth information only to extract local features, without considering it to improve the robustness and discriminability of the feature representation by merging depth cues into feature pooling. The spatial pyramid model (SPM) has become the standard protocol to split a 2D image plane into sub... The proposed joint spatial-depth pooling (JSDP) uses the depth cue and pools features simultaneously in the 2D image plane and along the depth direction. By combining the JSDP with standard feature extraction and feature encoding modules, we outperform state-of-the-art methods on benchmarks for RGB-D object classification, detection and scene...
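    A simplified illustration of the pooling idea: local feature codes are max-pooled over a grid of 2D spatial cells and depth slabs, instead of spatial cells only as in a standard spatial pyramid. The codes, coordinates, depths and bin counts below are synthetic assumptions; the real JSDP details differ.

```python
# Simplified joint spatial-depth max pooling of local feature codes.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 32
codes = rng.random((n, k))                  # encoded local descriptors
xy = rng.random((n, 2))                     # normalized image coordinates in [0, 1)
depth = rng.random(n)                       # normalized depth in [0, 1)

sx = sy = 2                                 # 2x2 spatial grid
sd = 2                                      # 2 depth slabs
pooled = np.zeros((sx, sy, sd, k))
spatial_bins = np.floor(xy * [sx, sy]).astype(int)
depth_bins = np.floor(depth * sd).astype(int)
for i in range(n):
    gx, gy = spatial_bins[i]
    gd = depth_bins[i]
    pooled[gx, gy, gd] = np.maximum(pooled[gx, gy, gd], codes[i])

print("pooled representation length:", pooled.ravel().shape[0])   # 2 * 2 * 2 * 32
```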

  13. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
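    The sketch below illustrates the general idea of representing a voltammogram by the coefficients of a fixed bank of Gaussian kernels fitted with linear least squares, so that a long current trace is reduced to a few numbers; the kernel centres, width and the synthetic trace are illustrative, not the paper's configuration.

```python
# Fixed-kernel regression: reduce a voltammogram to a few kernel coefficients.
import numpy as np

v = np.linspace(-1, 1, 400)                               # potential axis
current = np.exp(-((v - 0.2) ** 2) / 0.01) - 0.5 * np.exp(-((v + 0.4) ** 2) / 0.02)

centres = np.linspace(-1, 1, 12)                          # fixed kernel positions
width = 0.15
Phi = np.exp(-((v[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
coeffs, *_ = np.linalg.lstsq(Phi, current, rcond=None)    # least-squares fit

print("feature vector (12 coefficients):", np.round(coeffs, 3))
```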

  14. A concept for extraction of habitat features from laser scanning and hypersprectral imaging for evaluation of Natura 2000 sites - the ChangeHabitats2 project approach

    Science.gov (United States)

    Székely, B.; Kania, A.; Pfeifer, N.; Heilmeier, H.; Tamás, J.; Szöllősi, N.; Mücke, W.

    2012-04-01

    The goal of the ChangeHabitats2 project is the development of cost- and time-efficient habitat assessment strategies by employing effective field work techniques supported by modern airborne remote sensing methods, i.e. hyperspectral imagery and laser scanning (LiDAR). An essential task of the project is the design of a novel field work technique that on the one hand fulfills the reporting requirements of the Flora-Fauna-Habitat (FFH-) directive and on the other hand serves as a reference for the aerial data analysis. Correlations between parameters derived from remotely sensed data and terrestrial field measurements shall be exploited in order to create half- or fully-automated methods for the extraction of relevant Natura2000 habitat parameters. As a result of these efforts a comprehensive conceptual model has been developed for extraction and integration of Natura 2000 relevant geospatial data. This scheme is an attempt to integrate various activities within ChangeHabitats2 project defining pathways of development, as well as encompassing existing data processing chains, theoretical approaches and field work. The conceptual model includes definition of processing levels (similar to those existing in remote sensing), whereas these levels cover the range from the raw data to the extracted habitat feature. For instance, the amount of dead wood (standing or lying on the surface) is an important evaluation criterion for the habitat. The tree trunks lying on the ground surface typically can be extracted from the LiDAR point cloud, and the amount of wood can be estimated accordingly. The final result will be considered as a habitat feature derived from laser scanning data. Furthermore, we are also interested not only in the determination of the specific habitat feature, but also in the detection of its variations (especially in deterioration). In this approach the variation of this important habitat feature is considered to be a differential habitat feature, that can

  15. Modified kernel-based nonlinear feature extraction.

    Energy Technology Data Exchange (ETDEWEB)

    Ma, J. (Junshui); Perkins, S. J. (Simon J.); Theiler, J. P. (James P.); Ahalt, S. (Stanley)

    2002-01-01

    Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation that is inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter-matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.

  16. Dominant color and texture feature extraction for banknote discrimination

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-07-01

    Banknote discrimination with image recognition technology is significant in many applications. The traditional methods based on image recognition only recognize the banknote denomination without discriminating the counterfeit banknote. To solve this problem, we propose a systematical banknote discrimination approach with the dominant color and texture features. After capturing the visible and infrared images of the test banknote, we first implement the tilt correction based on the principal component analysis (PCA) algorithm. Second, we extract the dominant color feature of the visible banknote image to recognize the denomination. Third, we propose an adaptively weighted local binary pattern with "delta" tolerance algorithm to extract the texture features of the infrared banknote image. At last, we discriminate the genuine or counterfeit banknote by comparing the texture features between the test banknote and the benchmark banknote. The proposed approach is tested using 14,000 banknotes of six different denominations from Chinese yuan (CNY). The experimental results show 100% accuracy for denomination recognition and 99.92% accuracy for counterfeit banknote discrimination.
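    The sketch below computes a plain uniform local-binary-pattern histogram of an infrared patch with scikit-image; the paper's adaptively weighted LBP with "delta" tolerance is a modification of this basic idea and is not reproduced, and the patch is random.

```python
# Plain uniform LBP texture histogram of an infrared banknote patch.
import numpy as np
from skimage.feature import local_binary_pattern

patch = np.random.default_rng(0).random((64, 64))
P, R = 8, 1.0                                              # 8 neighbours at radius 1
lbp = local_binary_pattern(patch, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print("LBP texture feature:", np.round(hist, 3))
```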

  17. Robust Discriminant Regression for Feature Extraction.

    Science.gov (United States)

    Lai, Zhihui; Mo, Dongmei; Wong, Wai Keung; Xu, Yong; Miao, Duoqian; Zhang, David

    2017-10-09

    Ridge regression (RR) and its extended versions are widely used as an effective feature extraction method in pattern recognition. However, the RR-based methods are sensitive to the variations of data and can learn only a limited number of projections for feature extraction and recognition. To address these problems, we propose a new method called robust discriminant regression (RDR) for feature extraction. In order to enhance the robustness, the L₂,₁-norm is used as the basic metric in the proposed RDR. The designed robust objective function in regression form can be solved by an iterative algorithm containing an eigenfunction, through which the optimal orthogonal projections of RDR can be obtained by eigendecomposition. The convergence analysis and computational complexity are presented. In addition, we also explore the intrinsic connections and differences between the RDR and some previous methods. Experiments on some well-known databases show that RDR is superior to the classical and very recently proposed methods reported in the literature, whether they are L₂-norm or L₂,₁-norm-based regression methods. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.

  18. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on Surface Electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an Auto Regressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector that was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO DATABASE, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
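    Two of the listed features are sketched below: autoregressive coefficients estimated by least squares and the mean frequency of the power spectrum. The "SEMG" signal is synthetic noise, and the AR order of 4 is an illustrative choice.

```python
# AR coefficients and mean frequency of a single SEMG channel.
import numpy as np

rng = np.random.default_rng(0)
emg = rng.normal(size=2000)                  # placeholder single-channel SEMG
fs = 2000.0

def ar_coefficients(x, order=4):
    # Solve x[t] ~ a1*x[t-1] + ... + a_p*x[t-p] in the least-squares sense.
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

def mean_frequency(x, fs):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

print("AR coefficients:", np.round(ar_coefficients(emg), 3))
print("mean frequency (Hz):", round(mean_frequency(emg, fs), 1))
```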

  19. Automatic seamless image mosaic method based on SIFT features

    Science.gov (United States)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature extraction algorithm SIFT is used for feature extraction and matching, which achieves sub-pixel precision for feature extraction. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with the RANSAC algorithm, the computational efficiency is higher and more inliers are obtained. The transformation matrix H is then refined with the LM algorithm. Finally, the image mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaic quality is good and the algorithm is very stable, making it highly valuable in practice.
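    A baseline version of this pipeline is sketched below with OpenCV: SIFT matching followed by RANSAC homography estimation (the paper's improved PROSAC and LM refinement are not reproduced). It assumes opencv-python >= 4.4, where SIFT is in the main module, and the file names are hypothetical grayscale inputs.

```python
# Baseline SIFT matching + RANSAC homography with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input images
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("homography:\n", H, "\ninliers:", int(inliers.sum()))
```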

  20. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilized different properties of images, like intensity distribution, spatial frequencies content and several others. A few case studies including isotropic...... and heterogeneous, congruent and non-congruent images are used to illustrate how the described methods work and to compare some of them...

  1. COMPACT AND HYBRID FEATURE DESCRIPTION FOR BUILDING EXTRACTION

    Directory of Open Access Journals (Sweden)

    Z. Li

    2017-05-01

    Full Text Available Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, quite a large number of samples is required in training a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we bring forward a compact and hybrid feature description method in order to guarantee desirable classification accuracy for the corners on the building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of color channels, this descriptor is not only computationally frugal, but also more accurate than SURF for building extraction.

  2. Compact and Hybrid Feature Description for Building Extraction

    Science.gov (United States)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

    Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, quite a large number of samples is required in training a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we bring forward a compact and hybrid feature description method in order to guarantee desirable classification accuracy for the corners on the building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of color channels, this descriptor is not only computationally frugal, but also more accurate than SURF for building extraction.
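    The descriptor above is built from sets of binary intensity tests. A BRIEF-style sketch of a single set of such tests on one grayscale patch is given below; the paper's hybrid descriptor additionally uses all colour channels and four test sets, which are not reproduced here, and the patch and test locations are random.

```python
# BRIEF-style binary descriptor from pairwise intensity tests inside a patch.
import numpy as np

rng = np.random.default_rng(0)
patch = rng.random((32, 32))                                   # grayscale patch (placeholder)

n_tests = 128
pts_a = rng.integers(0, 32, size=(n_tests, 2))                 # random test locations
pts_b = rng.integers(0, 32, size=(n_tests, 2))
descriptor = (patch[pts_a[:, 0], pts_a[:, 1]] < patch[pts_b[:, 0], pts_b[:, 1]]).astype(np.uint8)

# Descriptors of this kind are compared with the Hamming distance.
other = rng.integers(0, 2, size=n_tests, dtype=np.uint8)
print("Hamming distance:", int(np.count_nonzero(descriptor != other)))
```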

  3. Reaction Decoder Tool (RDT): extracting features from chemical reactions

    OpenAIRE

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W.; Holliday, Gemma L.; Steinbeck, Christoph; Thornton, Janet M.

    2016-01-01

    Summary: Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and...

  4. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  5. Multi-Image Road Extraction

    National Research Council Canada - National Science Library

    Harvey, W. A; Cochran, Steven D; McKeown, David M

    2005-01-01

    .... It also supports direct extraction of 3D information along the path of the road. Determination of road elevation has significant implications for reducing cost and time in applications requiring cartographic features with full 3D attribution...

  6. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Abstract Technology advances, as well as the emergence of large-scale multimedia applications and the revolution of the World Wide Web, have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time, anywhere, and upload that image to ever-growing image databases. Development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector and colour correlogram) and three texture features (grey level co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web-based CBIR system. A web crawler was used to first crawl through web sites; images found on those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation scheme and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy is notably high for natural images such as outdoor scenes and images of flowers. Images which have a similar colour and texture distribution were also retrieved as similar, even though the images belonged to different semantic categories. This can be ideal for an artist who wants

  7. Automated Fluid Feature Extraction from Transient Simulations

    Science.gov (United States)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  8. Features extraction in anterior and posterior cruciate ligaments analysis.

    Science.gov (United States)

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make it easier to diagnose them. Extraction of feature vectors is obtained by analysis of both the anterior and posterior cruciate ligaments. This procedure is performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments (CL) is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed. This procedure permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted anterior and posterior cruciate ligament structures, 3-dimensional models of the anterior and posterior cruciate ligament are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK). Copyright © 2015 Elsevier Ltd. All rights reserved.
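    A minimal fuzzy C-means clustering of pixel intensities, the core of the first stage above, is sketched below; the paper's median modification for reducing blurred edges is omitted, and the intensity data are synthetic.

```python
# Minimal fuzzy C-means on 1D pixel intensities.
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    rng = np.random.default_rng(0)
    centers = rng.choice(x, size=c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9                 # distances to centres
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)  # memberships
        centers = (u ** m).T @ x / np.sum(u ** m, axis=0)                # update centres
    return u, centers

pixels = np.concatenate([np.random.normal(mu, 0.05, 500) for mu in (0.2, 0.5, 0.8)])
u, centers = fuzzy_cmeans(pixels)
print("cluster centres:", np.round(np.sort(centers), 2))
```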

  9. Advancements in Research on Micro-motion Feature Extraction in the Terahertz Region

    Directory of Open Access Journals (Sweden)

    Yang Qi

    2018-02-01

    Full Text Available With years of development and accumulation, a considerable amount of research has focused on micro-motion, an important auxiliary feature in radar target detection and recognition. With the recent rise of terahertz technology, micro-motion feature extraction in the terahertz region has increasingly shown its advantages. Herein, we systematically survey the recent research on terahertz radar micro-motion feature extraction and discuss micro-motion feature analysis, micro-motion feature extraction, and micro-motion target imaging. We then introduce the work of our research team, including theoretical and experimental research on micro-motion feature analysis, micro-motion feature extraction and high-resolution/high-frame-rate micro-motion target imaging. Furthermore, we analyze the growing trend of micro-motion feature extraction in the terahertz region and point out new technical directions worth studying further as well as the technical challenges to be solved.
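    Micro-motion signatures are commonly made visible with a time-frequency analysis of the radar echo. The sketch below computes a short-time Fourier transform of a sinusoidally phase-modulated baseband echo; the modulation parameters and sampling rate are illustrative assumptions, not values from the surveyed work.

```python
# STFT of a sinusoidally phase-modulated echo (a generic micro-Doppler signature).
import numpy as np
from scipy.signal import stft

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
f_rot, amp = 4.0, 40.0                                   # rotation rate and Doppler amplitude
echo = np.exp(1j * (amp / f_rot) * np.sin(2 * np.pi * f_rot * t))   # micro-motion phase term

f, tau, Z = stft(echo, fs=fs, nperseg=128, return_onesided=False)
print("spectrogram shape (freq bins x time frames):", np.abs(Z).shape)
```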

  10. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing and combining occidental iridology with traditional Chinese medicine is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.
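    The texture-analysis stage above relies on 2-D Gabor filtering. The sketch below extracts Gabor magnitude statistics from an iris-texture patch at several orientations with scikit-image; the frequency, orientations and random patch are illustrative, not the paper's settings.

```python
# Gabor filter-bank texture features of an iris patch.
import numpy as np
from skimage.filters import gabor

patch = np.random.default_rng(0).random((64, 64))           # placeholder iris texture
features = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(patch, frequency=0.2, theta=theta)
    magnitude = np.hypot(real, imag)
    features.extend([magnitude.mean(), magnitude.std()])
print("Gabor feature vector:", np.round(features, 4))
```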

  11. Feature Extraction and Analysis of Breast Cancer Specimen

    Science.gov (United States)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue by a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison in this paper. In fact, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique through image processing, and prevention by controlling p53 gene mutation to some greater extent.

  12. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including conducting repairs, upgrading and refueling spacecraft on-orbit, will be performed much more frequently. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements for image tracking in space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and the feature extraction algorithm is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used to determine and track the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains the gray-level image distributed by fractal dimension using the Differential Box-Counting (DBC) approach of fractal theory to restrain the noise. After this, we detect the consecutive edges using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area to reduce computation greatly. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method to solve the problems of relative pose estimation for spacecraft.
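    A minimal sketch of the Differential Box-Counting idea used above: for each box size, each grid cell contributes ceil(max/h') - ceil(min/h') + 1 boxes based on the intensity range inside it, and the fractal dimension is the slope of log(count) versus log(1/size). Box sizes and the test image are illustrative.

```python
# Differential box-counting (DBC) fractal dimension of a grayscale image.
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    g = img.astype(float)
    gray_range = g.max() if g.max() > 0 else 1.0
    counts = []
    for s in sizes:
        n_r = 0
        box_h = gray_range * s / img.shape[0]             # box height scales with grid size
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = g[i:i + s, j:j + s]
                n_r += int(np.ceil(block.max() / box_h) - np.ceil(block.min() / box_h) + 1)
        counts.append(n_r)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.default_rng(0).integers(0, 256, size=(128, 128))
print("estimated fractal dimension:", round(dbc_fractal_dimension(img), 2))
```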

  13. INTEGRATION OF IMAGE-DERIVED AND POS-DERIVED FEATURES FOR IMAGE BLUR DETECTION

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2016-06-01

    Full Text Available Image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on the UAV to enable acquisition of the UAV trajectory. It can be used to calculate the positional and angular velocities when the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps include feature extraction, blur image detection and verification. In feature extraction, this study extracts different features from the images and the POS. The image-derived features include the mean and standard deviation of the image gradient. For POS-derived features, we modify the traditional degree-of-linear-blur (b_linear) method into a degree-of-motion-blur (b_motion) based on the collinearity condition equations and POS parameters. Besides, POS parameters such as positional and angular velocities are also adopted as POS-derived features. In blur detection, this study uses a Support Vector Machine (SVM) classifier and the extracted features (i.e. image information, POS data, b_linear and b_motion) to separate blurred and sharp UAV images. The experiment utilizes the SenseFly eBee UAV system. The number of images is 129. In blur image detection, we use the proposed degree-of-motion-blur and other image features to classify the blurred and sharp images. The classification result shows that the overall accuracy using image features alone is only 56%. The integration of image-derived and POS-derived features has improved the overall accuracy from 56% to 76% in blur detection. Besides, this study indicates that the performance of the proposed degree-of-motion-blur is better than the traditional degree-of-linear-blur.

  14. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, but, aiming at new applications, deep learning makes it possible to acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  15. The Effect of Image Enhancement Methods during Feature Detection and Matching of Thermal Images

    Science.gov (United States)

    Akcay, O.; Avsar, E. O.

    2017-05-01

    Successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce more suitable images for detection and extraction. Three main digital image processing techniques (histogram equalization, high-pass filtering and low-pass filtering) were considered to increase the signal-to-noise ratio, sharpen the image and remove noise, respectively. Later on, the pre-processed images were evaluated using the current feature detection and extraction methods Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF). The obtained results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques were compared in the paper.

  16. Diabetic Retinopathy Screening by Bright Lesion Extraction from Fundus Images

    Science.gov (United States)

    Hanđsková, Veronika; Pavlovičova, Jarmila; Oravec, Miloš; Blaško, Radoslav

    2013-09-01

    Retinal images are nowadays widely used to diagnose many diseases, for example diabetic retinopathy. In this work, we propose an algorithm for a screening application that identifies, in an early phase, patients with a severe diabetic complication such as diabetic retinopathy. The application uses only the patient's fundus photograph, without any additional examination by an ophthalmologist. After this screening identification, other examination methods should be considered and follow-up of the patient by a doctor is necessary. Our application is composed of three principal modules: fundus image preprocessing, feature extraction and feature classification. The image preprocessing module performs luminance normalization, contrast enhancement and optic disc masking. The feature extraction module includes two stages: localization of bright lesion candidates and extraction of candidate features; we selected 16 statistical and structural features. For feature classification, we use a multilayer perceptron (MLP) with one hidden layer and classify images into two classes. The feature classification efficiency is about 93 percent.

  17. Research of image matching algorithm based on local features

    Science.gov (United States)

    Sun, Wei

    2015-07-01

    To address the low efficiency of the SIFT algorithm when an exhaustive search is used to find the nearest and second-nearest neighbors of feature points, this paper introduces the K-D tree algorithm to index the feature points extracted from database images in a tree structure. The algorithm is further improved with a weighted-priority search, which additionally enhances the efficiency of feature matching.
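
    A minimal sketch of SIFT matching accelerated with a k-d tree, as discussed above, using OpenCV's FLANN-based matcher and Lowe's ratio test on the two nearest neighbours. The file names are placeholders.

        import cv2

        img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("database.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # algorithm=1 selects the k-d tree index, so the database descriptors are
        # searched hierarchically instead of exhaustively.
        flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
        matches = flann.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        print(f"{len(good)} matches passed the ratio test")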

  18. Research on Forest Flame Recognition Algorithm Based on Image Feature

    Science.gov (United States)

    Wang, Z.; Liu, P.; Cui, T.

    2017-09-01

    In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. To address this, the paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the color characteristics of a large number of forest fire image samples are prepared and analyzed. Using the K-means clustering algorithm, a forest flame model is obtained by comparing two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model; the method can be applied to forest fire identification in different scenes and is feasible in practice.

  19. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. To address this, the paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the color characteristics of a large number of forest fire image samples are prepared and analyzed. Using the K-means clustering algorithm, a forest flame model is obtained by comparing two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model; the method can be applied to forest fire identification in different scenes and is feasible in practice.
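
    A rough sketch of flame-region extraction in the YCrCb colour space with k-means clustering, in the spirit of the approach above. The rule used to pick the "flame" cluster (highest mean Cr) and the sample file name are assumptions for illustration only.

        import cv2
        import numpy as np

        bgr = cv2.imread("forest_fire.jpg")                        # hypothetical sample image
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        pixels = ycrcb.reshape(-1, 3).astype(np.float32)

        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

        flame_cluster = np.argmax(centers[:, 1])                   # cluster with highest mean Cr
        mask = (labels.reshape(ycrcb.shape[:2]) == flame_cluster).astype(np.uint8) * 255
        cv2.imwrite("flame_mask.png", mask)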

  20. Pain detection from facial images using unsupervised feature learning approach.

    Science.gov (United States)

    Kharghanian, Reza; Peiravi, Ali; Moradi, Farshad

    2016-08-01

    In this paper a new method for continuous pain detection is proposed. One approach to detecting the presence of pain is to process images of the face: it has been reported that the expression of pain can be detected from facial Action Units (AUs), in which case each action unit must be detected separately and the detections combined through a linear expression; alternatively, pain can be detected directly from the painful face. There are different methods to extract shape and appearance features, which must be extracted separately and then used to train a classifier. Here, a hierarchical unsupervised feature learning approach is proposed to extract the features needed for pain detection from facial images. The features are extracted with a convolutional deep belief network (CDBN) and capture different properties of painful images such as head movements and shape and appearance information. The proposed model was tested on the publicly available UNBC-McMaster Shoulder Pain Archive Database and achieved nearly 95% area under the ROC curve, which compares favorably with other reported results.

  1. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. The method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator, named the regional homogeneity local binary pattern (RHLBP), to guarantee regional homogeneity in PolSAR images. In the classification step, color features extracted from false-color images are applied to improve classification accuracy. The RHLBP operator and the color features provide discriminative information to separate pixels and regions that have similar polarimetric features but belong to different classes. Extensive experimental comparisons with conventional methods on L-band PolSAR data demonstrate the effectiveness of the proposed method for PolSAR image classification.
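
    A simplified sketch using the standard LBP operator from scikit-image (not the RHLBP variant proposed above) to build a per-region texture histogram of the kind that could be fed to an SVM alongside colour features. The random patch stands in for a segmented region.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray_patch, points=8, radius=1):
            """Uniform LBP histogram of a single image region."""
            lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
            n_bins = points + 2                                    # uniform patterns + "other"
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
            return hist

        patch = np.random.randint(0, 255, (64, 64)).astype(np.uint8)   # stand-in region
        print(lbp_histogram(patch))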

  2. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are input as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, conventional Bag-of-Words features are designed from local points or other related information and are thus unable to fully describe landform areas; this limitation cannot be ignored when accurate aerial scene recognition is the goal. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes, and a Bag-of-Words scene recognition method for aerial imaging is designed on top of it. The proposed superpixel-based feature exploits landform information and spans from the top-level task of superpixel extraction of landforms to the bottom-level task of feature vector expression. The characterization comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Image scene recognition experiments are carried out on real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than that of scene recognition algorithms based on other local features.
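
    A minimal sketch of the superpixel segmentation step (simple linear iterative clustering) that the feature above is built on, using scikit-image; the input file name is a placeholder.

        from skimage import io, segmentation, color

        image = io.imread("aerial_scene.jpg")                      # hypothetical UAV frame
        segments = segmentation.slic(image, n_segments=300, compactness=10, start_label=1)

        # Visual check: replace each superpixel with its mean colour.
        avg = color.label2rgb(segments, image, kind="avg")
        io.imsave("superpixels.png", avg.astype("uint8"))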

  3. Identifying Image Manipulation Software from Image Features

    Science.gov (United States)

    2015-03-26


  4. The optimal extraction of feature algorithm based on KAZE

    Science.gov (United States)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    KAZE is a novel 2D feature extraction algorithm that operates in a nonlinear scale space. However, building the nonlinear scale space and constructing KAZE feature vectors are significantly more expensive than SIFT and SURF. In this paper, the input image is used to build the nonlinear scale space up to a maximum evolution time using efficient Additive Operator Splitting (AOS) techniques and variable-conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each dimension of the space, reducing computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, descriptor computation is optimized with a wavelet transform, which avoids the second Gaussian smoothing used in KAZE features and distinctly reduces the complexity of the vector-building and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circular segment covering an angle of π/3 in a circular area of radius 6σ with a sampling step of size σ. Finally, description over a multidimensional patch at the given scale, centered on each point of interest and rotated to align its dominant orientation with a canonical direction, simplifies the descriptor by reducing its dimensionality, similar to the PCA-SIFT method. Although the features are somewhat more expensive to compute than SIFT because of the nonlinear scale space, the comparison experiments show that, relative to SURF and the previous approaches, the result is a step forward in detection, description and application performance.
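
    A brief sketch of extracting KAZE keypoints and descriptors with OpenCV's built-in implementation. The parameters are OpenCV defaults, not the optimised variant described above, and the file name is a placeholder.

        import cv2

        gray = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder image
        kaze = cv2.KAZE_create()
        keypoints, descriptors = kaze.detectAndCompute(gray, None)
        print(f"{len(keypoints)} KAZE keypoints, descriptor size {descriptors.shape[1]}")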

  5. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
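
    A minimal sketch of Fourier shape descriptors for a single closed contour: the boundary is treated as a complex signal and the magnitudes of its low-order Fourier coefficients, normalised for scale, give a translation- and rotation-insensitive shape feature. This illustrates the general technique, not the exact descriptor set used in the study above; the circular mask stands in for a segmented structure.

        import cv2
        import numpy as np

        def fourier_shape_descriptor(binary_mask, n_coeffs=10):
            contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            boundary = max(contours, key=cv2.contourArea).squeeze()    # (N, 2) boundary points
            signal = boundary[:, 0] + 1j * boundary[:, 1]              # complex boundary signal
            coeffs = np.fft.fft(signal)
            magnitudes = np.abs(coeffs[1:n_coeffs + 1])                # drop DC term (translation)
            return magnitudes / magnitudes[0]                          # scale normalisation

        mask = np.zeros((128, 128), np.uint8)
        cv2.circle(mask, (64, 64), 40, 255, -1)                        # toy "nucleus"
        print(fourier_shape_descriptor(mask))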

  6. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model that is widely used to extract human facial features for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is performed on the face images, and the image structure is then represented by a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  7. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Full Text Available Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground and an imposed layer of vital information, whereas the template document image consists of only the null background and the general-information foreground. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and the template image using the convex hull, and the vertices of the two convex polygons are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling; translation is handled by a tight crop. For every transformation of the input vertices, the minimum Hausdorff distance (MHD) is computed, which identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since the transformation is an estimation process, the components in the input image do not overlay exactly on the components in the template; therefore a connected-component technique is applied to extract contour boxes at word level and identify partially overlapping components. Geometrical features such as density, area and degree of overlap are extracted and compared between partially overlapping components to identify and eliminate components common to the input and template images; the residue constitutes the imposed layer. Experiments have been conducted on a variety of filled-in forms, applications and bank cheques, with data sets generated as test sets for comparative analysis; the results indicate the efficacy of the proposed model and its computational complexity.
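
    A small sketch of two building blocks used above: the convex hull of document content and the Hausdorff distance between the input and template hulls. The search over rotation and scaling is omitted, and the toy arrays stand in for binarised documents.

        import cv2
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def content_hull(binary_doc):
            """Convex hull vertices of all foreground pixels in a binarised document."""
            ys, xs = np.nonzero(binary_doc)
            points = np.column_stack([xs, ys]).astype(np.int32)
            return cv2.convexHull(points).squeeze()

        def hausdorff(hull_a, hull_b):
            """Symmetric Hausdorff distance between two vertex sets."""
            return max(directed_hausdorff(hull_a, hull_b)[0],
                       directed_hausdorff(hull_b, hull_a)[0])

        doc = np.zeros((200, 300), np.uint8); doc[50:150, 80:220] = 255     # toy filled-in form
        tpl = np.zeros((200, 300), np.uint8); tpl[55:155, 85:225] = 255     # toy template
        print(hausdorff(content_hull(doc), content_hull(tpl)))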

  8. Biomedical imaging modality classification using combined visual features and textual terms.

    Science.gov (United States)

    Han, Xian-Hua; Chen, Yen-Wei

    2011-01-01

    We describe an approach for automatic modality classification in the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper focuses on the process of feature extraction from medical images and fuses the different extracted visual features and a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray or color intensity and block-based variation as global features, and a SIFT histogram as the local feature. As the textual feature of the image representation, a binary histogram of predefined vocabulary words from the image captions is used. The different features are then combined using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated with the modality dataset provided by ImageCLEF 2010.

  9. Fuzzy logic techniques for blotch feature evaluation in dermoscopy images.

    Science.gov (United States)

    Khan, Azmath; Gupta, Kapil; Stanley, R J; Stoecker, William V; Moss, Randy H; Argenziano, Giuseppe; Soyer, H Peter; Rabinovitz, Harold S; Cognetta, Armand B

    2009-01-01

    Blotches, also called structureless areas, are critical in differentiating malignant melanoma from benign lesions in dermoscopy skin lesion images. In this paper, fuzzy logic techniques are investigated for the automatic detection of blotch features for malignant melanoma discrimination. Four fuzzy sets representative of blotch size and relative and absolute blotch colors are used to extract blotchy areas from a set of dermoscopy skin lesion images. Five previously reported blotch features are computed from the extracted blotches as well as four new features. Using a neural network classifier, malignant melanoma discrimination results are optimized over the range of possible alpha-cuts and compared with results using crisp blotch features. Features computed from blotches using the fuzzy logic techniques based on three plane relative color and blotch size yield the highest diagnostic accuracy of 81.2%.

  10. Semantic image segmentation with fused CNN features

    Science.gov (United States)

    Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan

    2017-09-01

    Semantic image segmentation is the task of predicting a category label for every image pixel; its key challenge is designing a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features as the feature representation: the hierarchical features contain more global information, while the region-based features contain more local information, and their combination significantly enhances the representation. The fused features are used to train a softmax classifier to produce per-pixel label assignment probabilities, and a fully connected conditional random field (CRF) is used as a post-processing step to improve labeling consistency. We conduct experiments on the SIFT Flow dataset; the pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.

  11. Image fusion using sparse overcomplete feature dictionaries

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  12. Diffusion tensor image registration using hybrid connectivity and tensor features.

    Science.gov (United States)

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-07-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. Copyright © 2013 Wiley Periodicals, Inc.

  13. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within a racially closely related group. As a sample of such a group, we choose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned, that eyes and eyebrows are the main attention points, and that the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample racial groups, and the extracted texture and shape features were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This work is an indispensable fundamental study of race perception, which is essential for establishing a human-like race recognition system.

  14. Feature Importance for Human Epithelial (HEp-2) Cell Image Classification

    Directory of Open Access Journals (Sweden)

    Vibha Gupta

    2018-02-01

    Full Text Available Indirect Immunofluorescence (IIF) microscopy imaging of human epithelial (HEp-2) cells is a popular method for diagnosing autoimmune diseases. Considering the large data volumes involved, computer-aided diagnosis (CAD) systems based on image classification can help in terms of time, effort and reliability of diagnosis. Such approaches rely on extracting representative features from the images. This work explores the selection of the most distinctive features for HEp-2 cell images using various feature selection (FS) methods. Considering that no single feature selection technique is universally optimal, we also propose a hybridization of one class of FS methods (filter methods). Furthermore, the notion of variable importance for ranking features, provided by another class of approaches (embedded methods such as random forest and random uniform forest), is exploited to select a good subset of features from a large set, such that adding new features does not increase classification accuracy. In this work we have also carefully designed class-specific features to capture the morphological visual traits of the cell patterns. We perform various experiments and discussions to demonstrate the effectiveness of the FS methods together with the proposed and a standard feature set, and we achieve state-of-the-art performance even with a small number of features obtained after feature selection.
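
    A minimal sketch of the variable-importance idea mentioned above: rank candidate features with a random forest and keep only the top-ranked subset. The data here is synthetic; in practice the rows would be HEp-2 cell feature vectors.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))                  # 300 cells, 40 candidate features
        y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)  # labels driven by two of the features

        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        ranking = np.argsort(forest.feature_importances_)[::-1]
        print("top features:", ranking[:5])             # the informative features should rank first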

  15. Imaging features of iliopsoas bursitis

    Energy Technology Data Exchange (ETDEWEB)

    Wunderbaldinger, P. [Department of Radiology, University of Vienna (Austria); Center of Molecular Imaging Research, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA (United States); Bremer, C. [Department of Radiology, University of Muenster (Germany); Schellenberger, E. [Center of Molecular Imaging Research, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA (United States); Department of Radiology, Martin-Luther University of Halle-Wittenberg, Halle (Germany); Cejna, M.; Turetschek, K.; Kainberger, F. [Department of Radiology, University of Vienna (Austria)

    2002-02-01

    The aim of this study was firstly to describe the spectrum of imaging findings seen in iliopsoas bursitis, and secondly to compare cross-sectional imaging techniques in the demonstration of the extent, size and appearance of the iliopsoas bursitis as referenced by surgery. Imaging studies of 18 patients (13 women, 5 men; mean age 53 years) with surgically proven iliopsoas bursitis were reviewed. All patients received conventional radiographs of the pelvis and hip, US and MR imaging of the hip. The CT was performed in 5 of the 18 patients. Ultrasound, CT and MR all demonstrated enlarged iliopsoas bursae. The bursal wall was thin and well defined in 83% and thickened in 17% of all cases. The two cases with septations on US were not seen by CT and MRI. A communication between the bursa and the hip joint was seen, and surgically verified, in all 18 patients by MR imaging, whereas US and CT failed to demonstrate it in 44 and 40% of the cases, respectively. Hip joint effusion was seen and verified by surgery in 16 patients by MRI, whereas CT (4 of 5) and US (n=12) underestimated the number. The overall size of the bursa corresponded best between MRI and surgery, whereas CT and US tended to underestimate the size. Contrast enhancement of the bursal wall was seen in all cases. The imaging characteristics of iliopsoas bursitis are a well-defined, thin-walled cystic mass with a communication to the hip joint and peripheral contrast enhancement. The most accurate way to assess iliopsoas bursitis is with MR imaging; thus, it should be used for accurate therapy planning and follow-up studies. In order to initially prove an iliopsoas bursitis, US is the most cost-effective, easy-to-perform and fast alternative. (orig.)

  16. Imaging features of iliopsoas bursitis

    International Nuclear Information System (INIS)

    Wunderbaldinger, P.; Bremer, C.; Schellenberger, E.; Cejna, M.; Turetschek, K.; Kainberger, F.

    2002-01-01

    The aim of this study was firstly to describe the spectrum of imaging findings seen in iliopsoas bursitis, and secondly to compare cross-sectional imaging techniques in the demonstration of the extent, size and appearance of the iliopsoas bursitis as referenced by surgery. Imaging studies of 18 patients (13 women, 5 men; mean age 53 years) with surgically proven iliopsoas bursitis were reviewed. All patients received conventional radiographs of the pelvis and hip, US and MR imaging of the hip. The CT was performed in 5 of the 18 patients. Ultrasound, CT and MR all demonstrated enlarged iliopsoas bursae. The bursal wall was thin and well defined in 83% and thickened in 17% of all cases. The two cases with septations on US were not seen by CT and MRI. A communication between the bursa and the hip joint was seen, and surgically verified, in all 18 patients by MR imaging, whereas US and CT failed to demonstrate it in 44 and 40% of the cases, respectively. Hip joint effusion was seen and verified by surgery in 16 patients by MRI, whereas CT (4 of 5) and US (n=12) underestimated the number. The overall size of the bursa corresponded best between MRI and surgery, whereas CT and US tended to underestimate the size. Contrast enhancement of the bursal wall was seen in all cases. The imaging characteristics of iliopsoas bursitis are a well-defined, thin-walled cystic mass with a communication to the hip joint and peripheral contrast enhancement. The most accurate way to assess iliopsoas bursitis is with MR imaging; thus, it should be used for accurate therapy planning and follow-up studies. In order to initially prove an iliopsoas bursitis, US is the most cost-effective, easy-to-perform and fast alternative. (orig.)

  17. Semi-Supervised Feature Transformation for Tissue Image Classification.

    Directory of Open Access Journals (Sweden)

    Kenji Watanabe

    Full Text Available Various systems have been proposed to support biological image analysis, with the intent of decreasing false annotations and reducing the heavy burden on biologists. These systems generally comprise a feature extraction method and a classification method. Task-oriented feature extraction methods leverage images characteristic of each problem and are very effective at improving classification accuracy. However, it is difficult to apply such feature extraction methods to versatile tasks in practice, because few biologists specialize in computer vision and/or pattern recognition and can design task-oriented methods. Thus, to improve the usability of these supporting systems, it is useful to develop a method that can automatically transform general-purpose image features into a form effective for the task of interest. In this paper, we propose a semi-supervised feature transformation method, formulated as a natural coupling of principal component analysis (PCA) and linear discriminant analysis (LDA) in the framework of graph embedding. Compared with other feature transformation methods, our method showed favorable classification performance in biological image analysis.
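
    A rough sketch of the general idea of coupling PCA and LDA for feature transformation: unsupervised variance capture followed by a supervised discriminant projection. This is a plain sequential pipeline on a standard dataset, not the graph-embedding coupling formulated in the paper above.

        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline
        from sklearn.datasets import load_digits
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = load_digits(return_X_y=True)
        model = make_pipeline(PCA(n_components=30),                    # unsupervised compression
                              LinearDiscriminantAnalysis(n_components=9),  # supervised projection
                              SVC())
        print(cross_val_score(model, X, y, cv=5).mean())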

  18. Multiscale High-Level Feature Fusion for Histopathological Image Classification

    Directory of Open Access Journals (Sweden)

    ZhiFei Lai

    2017-01-01

    Full Text Available Histopathological image classification is one of the most important steps in disease diagnosis. We propose a method for multiclass histopathological image classification based on a deep convolutional neural network, referred to as the coding network, which gains a better representation of the histopathological image than the coding network alone. The main idea is to train a deep convolutional neural network to extract high-level features and to fuse the high-level features of two convolutional layers into a multiscale high-level feature. To obtain better performance and higher efficiency, a sparse autoencoder (SAE) and principal component analysis (PCA) are employed to reduce the dimensionality of the multiscale high-level feature. We evaluate the proposed method on a real histopathological image dataset; our results suggest that the proposed method is effective and outperforms the coding network alone.

  19. Bag-of-visual-words based feature extraction for SAR target classification

    Science.gov (United States)

    Amrani, Moussa; Chaib, Souleyman; Omara, Ibrahim; Jiang, Feng

    2017-07-01

    Feature extraction plays a key role in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR); choosing appropriate features to train a classifier is a crucial prerequisite. Inspired by the great success of Bag-of-Visual-Words (BoVW), we address the feature extraction problem by proposing a novel feature extraction method for SAR target classification. First, Gabor-based features are extracted from the training SAR images. Second, a discriminative codebook is generated using the K-means clustering algorithm. Third, after feature encoding by computing the closest Euclidean distance, the targets are represented by a new robust bag of features. Finally, a support vector machine (SVM) is used as the baseline classifier for target classification. Experiments conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public release dataset show classification accuracy and time complexity results demonstrating that the proposed method outperforms state-of-the-art methods.
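
    A compact sketch of the bag-of-visual-words pipeline outlined above: cluster local descriptors into a codebook with k-means, then represent each image as a histogram of codeword assignments. ORB descriptors are used here as a stand-in for the Gabor-based features of the paper, and the image paths are placeholders.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def local_descriptors(path):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, des = cv2.ORB_create().detectAndCompute(gray, None)
            return des.astype(np.float32)

        train_paths = ["target_a.png", "target_b.png"]              # hypothetical SAR chips
        all_des = np.vstack([local_descriptors(p) for p in train_paths])
        codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(all_des)

        def bovw_histogram(path):
            words = codebook.predict(local_descriptors(path))
            hist = np.bincount(words, minlength=64).astype(float)
            return hist / hist.sum()                                # normalised bag of features

        print(bovw_histogram(train_paths[0]))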

  20. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder : asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  1. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler, et. al., 2011), are recently-proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et. al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik, et. al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  2. Solving jigsaw puzzles using image features

    DEFF Research Database (Denmark)

    Nielsen, Ture R.; Drewsen, Peter; Hansen, Klaus

    2008-01-01

    In this article, we describe a method for automatic solving of the jigsaw puzzle problem based on using image features instead of the shape of the pieces. The image features are used for obtaining an accurate measure for edge similarity to be used in a new edge matching algorithm. The algorithm...... algorithm which exploits the divide and conquer paradigm to reduce the combinatorially complex problem by classifying the puzzle pieces and comparing pieces drawn from the same group. The paper includes a brief preliminary investigation of some image features used in the classification....

  3. Featured Image: Identifying Weird Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2017-08-01

    Hoag's Object, an example of a ring galaxy. [NASA/Hubble Heritage Team/Ray A. Lucas (STScI/AURA)] The above image (click for the full view) shows PanSTARRS observations of some of the 185 galaxies identified in a recent study as ring galaxies: bizarre and rare irregular galaxies that exhibit stars and gas in a ring around a central nucleus. Ring galaxies could be formed in a number of ways; one theory is that some might form in a galaxy collision when a smaller galaxy punches through the center of a larger one, triggering star formation around the center. In a recent study, Ian Timmis and Lior Shamir of Lawrence Technological University in Michigan explore ways to identify ring galaxies among the overwhelming number of images expected from large upcoming surveys. They develop a computer analysis method that automatically finds ring galaxy candidates based on their visual appearance, and they test their approach on the 3 million galaxy images from the first PanSTARRS data release. To see more of the remarkable galaxies the authors found and to learn more about their identification method, check out the paper below. Citation: Ian Timmis and Lior Shamir 2017 ApJS 231 2. doi:10.3847/1538-4365/aa78a3

  4. Feature Extraction From DNA Sequences by Multifractal Analysis

    National Research Council Canada - National Science Library

    Zhang, H

    2001-01-01

    This paper presents feature extraction and estimation of multifractal measures of DNA sequences using a multifractal methodology and demonstrates a new scheme for identifying biological functionality...

  5. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individuals from their traits and characteristics. Iris recognition is one of the most reliable biometric methods: since iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, few commercial products are available because of its demanding requirements in terms of camera resolution, hardware size, expensive equipment and computational complexity. At present, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and a box-counting fractal dimension and the iris code are proposed as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.

  6. 3D Feature Extraction for Unstructured Grids

    Science.gov (United States)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  7. An improved SOM algorithm and its application to color feature extraction.

    Science.gov (United States)

    Chen, Li-Ping; Liu, Yi-Guang; Huang, Zeng-Xi; Shi, Yong-Tao

    2014-01-01

    Reducing the redundancy of dominant color features in an image while preserving the diversity and quality of the extracted colors is important in many applications such as image analysis and compression. This paper presents an improved self-organizing map (SOM) algorithm, named MFD-SOM, and its application to color feature extraction from images. Unlike the winner-take-all competitive principle used by conventional SOM algorithms, MFD-SOM prevents, to a certain degree, features of non-principal components in the training data from being weakened or lost during learning, which helps preserve the diversity of the extracted features. Besides, MFD-SOM adopts a new way of updating the weight vectors of neurons, which helps reduce redundancy in the features extracted from the principal components. In addition, a linear neighborhood function is applied in the proposed algorithm to improve its performance on color feature extraction. Experimental results on artificial datasets and benchmark image datasets demonstrate the characteristics of the MFD-SOM algorithm.

  8. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then established on cloud generators. With the forward cloud generator, facial expression images can be regenerated in any number to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database, where three common features are extracted from seven facial expression images. Concluding remarks close the paper.

  9. Chiari III malformation: imaging features.

    Science.gov (United States)

    Castillo, M; Quencer, R M; Dominguez, R

    1992-01-01

    To analyze and discuss the MR and CT features of Chiari type III malformations. MR and CT studies in nine neonates born at term with Chiari type III malformations were retrospectively reviewed. High cervical/low occipital encephaloceles were present in all cases. Hypoplasia of the low and midline aspects of the parietal bones was seen in four patients. The encephaloceles contained varying amounts of brain (cerebellum and occipital lobes, six cases; cerebellum only, three cases), ventricles (fourth, six cases; lateral, three cases), cisterns, and in one case, the medulla and pons. Associated anomalies included: petrous and clivus scalloping (five cases/nine cases), cerebellar hemisphere overgrowth (two cases/nine cases), cerebellar tonsillar herniation (three cases/seven cases), deformed midbrain (nine cases), hydrocephalus (two cases/nine cases), dysgenesis of the corpus callosum (six cases/nine cases), posterior cervical vertebral agenesis (three cases/eight cases), and spinal cord syrinxes (two cases/seven cases). In four patients who underwent surgical resection and closure, aberrant deep draining veins and ectopic venous sinuses within the encephaloceles were found. Pathology examination of the encephalocele (four cases/nine cases) showed multiple anomalies (necrosis, gliosis, heterotopias, meningeal fibrosis) that were not demonstrable by either MR or CT. The marked disorganization of the tissues contained within the cephalocele may account for the lack of MR sensitivity to these abnormalities. Preoperative determination of the position of the medulla and pons is essential and is easily accomplished by MR. To avoid surgical complications, the high incidence of venous anomalies should be kept in mind.

  10. Handwritten Character Classification using the Hotspot Feature Extraction Technique

    NARCIS (Netherlands)

    Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2012-01-01

    Feature extraction techniques can be important in character recognition, because they can enhance the efficacy of recognition in comparison to featureless or pixel-based approaches. This study aims to investigate the novel feature extraction technique called the hotspot technique in order to use it

  11. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    Science.gov (United States)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories: forest, new residential, old residential, and industrial, for each variation in texture parameters.
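
    A short sketch of the spatial gray tone dependence (co-occurrence) measures described above, computed with scikit-image for one window, one angle and one pixel distance. Window size, distance and angle are exactly the parameters the study varies; the random window stands in for a real image patch.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        window = np.random.randint(0, 16, (32, 32), dtype=np.uint8)   # stand-in window with 16 gray tones
        glcm = graycomatrix(window, distances=[1], angles=[0], levels=16,
                            symmetric=True, normed=True)

        features = {prop: graycoprops(glcm, prop)[0, 0]
                    for prop in ("ASM", "correlation", "homogeneity", "energy")}
        p = glcm[:, :, 0, 0]
        features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy is not built into graycoprops
        print(features)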

  12. MR imaging features of craniodiaphyseal dysplasia

    Energy Technology Data Exchange (ETDEWEB)

    Marden, Franklin A. [Mallinckrodt Institute of Radiology, Washington University Medical Center, 510 South Kingshighway Blvd., MO 63110, St. Louis (United States); Department of Radiology, St. Louis Children' s Hospital, Children' s Place, MO 63110, St. Louis (United States); Wippold, Franz J. [Mallinckrodt Institute of Radiology, Washington University Medical Center, 510 South Kingshighway Blvd., MO 63110, St. Louis (United States); Department of Radiology, St. Louis Children' s Hospital, Children' s Place, MO 63110, St. Louis (United States); Department of Radiology/Nuclear Medicine, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, MD 20814, Bethesda (United States)

    2004-02-01

    We report the magnetic resonance (MR) imaging findings in a 4-year-old girl with characteristic radiographic and computed tomography (CT) features of craniodiaphyseal dysplasia. MR imaging exquisitely depicted cranial nerve compression, small foramen magnum, hydrocephalus, and other intracranial complications of this syndrome. A syrinx of the cervical spinal cord was demonstrated. We suggest that MR imaging become a routine component of the evaluation of these patients. (orig.)

  13. Feature Detector and Descriptor for Medical Images

    Science.gov (United States)

    Sargent, Dusty; Chen, Chao-I.; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Daniel

    2009-02-01

    The ability to detect and match features across multiple views of a scene is a crucial first step in many computer vision algorithms for dynamic scene analysis. State-of-the-art methods such as SIFT and SURF perform successfully when applied to typical images taken by a digital camera or camcorder. However, these methods often fail to generate an acceptable number of features when applied to medical images, because such images usually contain large homogeneous regions with little color and intensity variation. As a result, tasks like image registration and 3D structure recovery become difficult or impossible in the medical domain. This paper presents a scale, rotation and color/illumination invariant feature detector and descriptor for medical applications. The method incorporates elements of SIFT and SURF while optimizing their performance on medical data. Based on experiments with various types of medical images, we combined, adjusted, and built on methods and parameter settings employed in both algorithms. An approximate Hessian based detector is used to locate scale invariant keypoints and a dominant orientation is assigned to each keypoint using a gradient orientation histogram, providing rotation invariance. Finally, keypoints are described with an orientation-normalized distribution of gradient responses at the assigned scale, and the feature vector is normalized for contrast invariance. Experiments show that the algorithm detects and matches far more features than SIFT and SURF on medical images, with similar error levels.

  14. Extended local binary pattern features for improving settlement type classification of quickbird images

    CSIR Research Space (South Africa)

    Mdakane, L

    2012-11-01

    Full Text Available Despite the fact that image texture features extracted from high-resolution remotely sensed images over urban areas have demonstrated their ability to distinguish different classes, they are still far from being ideal. Multiresolution grayscale...

  15. Wavelet based feature extraction and visualization in hyperspectral tissue characterization.

    Science.gov (United States)

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-12-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective and can be implemented in real time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a statistically significant correlation between the tissue parameters and selected wavelet components, and the tissue parameters could be mapped using a subset of the calculated components owing to redundancy in the spectral information. Vessel structures are well visualized. Wavelet analysis thus appears to be a promising tool for extracting spectral features of skin; future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition.
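
    A minimal sketch of decomposing a single reflectance spectrum with a discrete wavelet transform (PyWavelets); in the study above, selected coefficients from such decompositions are correlated with blood, melanin and oxygenation levels. The synthetic spectrum and the choice of the db4 wavelet are assumptions for illustration.

        import numpy as np
        import pywt

        wavelengths = np.linspace(450, 750, 160)                   # nm, illustrative sampling grid
        spectrum = 0.4 + 0.1 * np.sin(wavelengths / 20.0)          # stand-in reflectance curve

        coeffs = pywt.wavedec(spectrum, "db4", level=4)            # approximation + detail bands
        for level, c in enumerate(coeffs):
            print(f"band {level}: {len(c)} coefficients, energy {np.sum(c ** 2):.3f}")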

  16. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which require high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene; overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting conditions may thus be prohibitive for feature detection and extraction, and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images can help these operators because they broaden the range of illumination that Standard or Low Dynamic Range (SDR/LDR) images can capture and thereby increase the amount of detail contained in the image. The experimental results of this study support this assumption by examining state-of-the-art feature detectors applied to both standard dynamic range and HDR images.

  17. Source Assignment and Feature Extraction in Speech

    Science.gov (United States)

    Ades, Anthony E.

    1977-01-01

    Three experiments investigated the relationship in speech perception between the mechanisms that determine the source of speech sounds and those that analyze their actual acoustic contents and extract from them the acoustic cues to a sound's phonetic description. (Author/RK)

  18. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. First, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Second, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Third, to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold, and the final segmentation result was processed by a morphology operation to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 samples, indicating that the proposed method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
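
    A simplified sketch of part of the pipeline above: extract the a*-component from the L*a*b* colour space (the red/green opponent channel, in which ripe tomatoes stand out) and segment it with an automatically chosen Otsu threshold followed by a morphological opening. The wavelet-based fusion with the I-component is omitted, and the file names are placeholders.

        import cv2

        bgr = cv2.imread("tomato_canopy.jpg")                      # hypothetical field image
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        a_channel = lab[:, :, 1]

        _, mask = cv2.threshold(a_channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,              # remove small noise blobs
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        cv2.imwrite("tomato_mask.png", mask)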

  19. Synthetic range profiling, ISAR imaging of sea vessels and feature extraction, using a multimode radar to classify targets: initial results from field trials

    CSIR Research Space (South Africa)

    Abdul Gaffar, MY

    2011-04-01

    Full Text Available -based classification of small to medium sized sea vessels in littoral condition. The experimental multimode radar is based on an experimental tracking radar that was modified to generate SRP and ISAR images in both search and tracking modes. The architecture...

  20. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Performance of the proposed iris recognition system (IRS) has been measured by recording false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, radial direction of ...

  1. Feature extraction of arc tracking phenomenon

    Science.gov (United States)

    Attia, John Okyere

    1995-01-01

    This document outlines arc tracking signals -- both the data acquisition and signal processing. The objective is to obtain the salient features of the arc tracking phenomenon. As part of the signal processing, the power spectral density is obtained and used in a MATLAB program.

  2. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on the image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent approaches to CBIR differ in terms of which image features are extracted to be used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. This framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide better retrieval accuracy than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form the global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and the modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. The second approach
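
    As a hedged illustration of the colour part of such a global descriptor (not the thesis code), the sketch below builds a YCbCr histogram per image and ranks a small placeholder collection by histogram distance; it assumes opencv-python and that the listed files exist.

      import cv2

      def ycbcr_histogram(path, bins=8):
          # 3-D colour histogram in the YCbCr (YCrCb in OpenCV) space, L2-normalised.
          img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2YCrCb)
          hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
          return cv2.normalize(hist, hist).flatten()

      database = ["img1.jpg", "img2.jpg", "img3.jpg"]        # placeholder collection
      query = ycbcr_histogram("query.jpg")
      ranked = sorted(database,
                      key=lambda p: cv2.compareHist(query, ycbcr_histogram(p),
                                                    cv2.HISTCMP_BHATTACHARYYA))
      print("best match:", ranked[0])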

  3. Gully Features Extraction Using Remote Sensing Techniques ...

    African Journals Online (AJOL)

    Gullies are large and deep erosion depressions or channels normally occurring in drainage ways. They are spectrally heterogeneous, making them difficult to map using pixel based classification technique. The advancement of remote sensing in terms of Geographic Object Based Image Analysis (GEOBIA) provides new ...

  4. Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face Detection

    Directory of Open Access Journals (Sweden)

    Xiaojun Lu

    2017-01-01

    Full Text Available This paper proposes a method that uses feature fusion to represent images better for face detection after feature extraction by a deep convolutional neural network (DCNN). First, with Clarifai net and VGG Net-D (16 layers), we learn features from the data; then we fuse the features extracted from the two nets. To obtain a more compact feature representation and mitigate computational complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification with an SVM classifier for binary classification. In particular, we exploit offset max-pooling to extract features densely with a sliding window, which leads to better matches between faces and detection windows; thus the detection result is more accurate. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves an 89.24% recall rate on FDDB and 97.19% average precision on AFW.
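
    The fuse-reduce-classify stage can be sketched as follows; random vectors stand in for the Clarifai-net and VGG Net-D features, so this only illustrates the flow rather than the paper's implementation, and it assumes numpy and scikit-learn.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 200
      feats_net1 = rng.normal(size=(n, 1024))          # stand-in for features from net 1
      feats_net2 = rng.normal(size=(n, 4096))          # stand-in for features from net 2
      labels = rng.integers(0, 2, size=n)              # face / non-face

      fused = np.hstack([feats_net1, feats_net2])      # feature-level fusion
      pca = PCA(n_components=64).fit(fused[:150])      # compact representation, fit on train split
      train, test = pca.transform(fused[:150]), pca.transform(fused[150:])

      clf = SVC(kernel="linear").fit(train, labels[:150])
      print("held-out accuracy:", clf.score(test, labels[150:]))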

  5. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  6. Universal Feature Extraction for Traffic Identification of the Target Category.

    Science.gov (United States)

    Shen, Jian; Xia, Jingbo; Dong, Shufu; Zhang, Xiaoyan; Fu, Kai

    2016-01-01

    Traffic identification of the target category is currently a significant challenge for network monitoring and management. To identify the target category with pertinence, a feature extraction algorithm based on the subset with the highest proportion is presented in this paper. The method can be applied to the identification of any category that is assigned as the target one and is not restricted to a particular category. We divide the process of feature extraction into two stages. In the primary feature extraction stage, the feature subset is extracted from the dataset that has the highest proportion of the target category. In the secondary feature extraction stage, features that can distinguish the target and interfering categories are added to the feature subset. Our theoretical analysis and experimental observations reveal that the proposed algorithm is able to extract fewer features with greater identification ability for the target category. Moreover, the universality of the proposed algorithm is demonstrated by experiments in which every category in turn is set as the target one.

  7. Histopathological Image Classification using Discriminative Feature-oriented Dictionary Learning

    Science.gov (United States)

    Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, UK Arvind

    2016-01-01

    In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy against the number of training images which is highly desirable in practice where generous training is often not available. PMID:26513781

  8. A Method of SAR Target Recognition Based on Gabor Filter and Local Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Wang Lu

    2015-12-01

    Full Text Available This paper presents a novel texture feature extraction method based on a Gabor filter and Three-Patch Local Binary Patterns (TPLBP) for Synthetic Aperture Radar (SAR) target recognition. First, SAR images are processed by a Gabor filter in different directions to enhance the significant features of the targets and their shadows. Then, effective local texture features are extracted from the Gabor-filtered images by TPLBP. This not only overcomes the shortcoming of Local Binary Patterns (LBP), which cannot describe texture features for large-scale neighborhoods, but also maintains the rotation-invariant characteristic, which alleviates the impact of direction variations of SAR targets on recognition performance. Finally, we use an Extreme Learning Machine (ELM) classifier on the extracted texture features. Experimental results on the MSTAR database demonstrate the effectiveness of the proposed method.
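
    A simplified sketch of the texture stage is shown below: a small Gabor filter bank followed by uniform LBP histograms on each response (plain LBP is used here as a stand-in for TPLBP). It assumes opencv-python and scikit-image, and the file name sar_chip.png is a placeholder.

      import cv2
      import numpy as np
      from skimage.feature import local_binary_pattern

      img = cv2.imread("sar_chip.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

      features = []
      for theta in np.arange(0, np.pi, np.pi / 4):               # four filter orientations
          kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5)
          response = cv2.filter2D(img, cv2.CV_32F, kernel)
          response = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
          lbp = local_binary_pattern(response, P=8, R=1, method="uniform")
          hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
          features.append(hist)

      feature_vector = np.concatenate(features)                  # input to the classifier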

  9. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    extraction from medical images and fuses the different extracted visual features and the textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features and a SIFT histogram as a local feature. For the textual feature of image representation, the binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.

  10. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  11. Angiomatous Meningioma: CT and MR Imaging Features

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Hee Yeon; Yu, In Kyu; Kim, Min Sun [Dept. of Radiology, Eulji University Hospital, Daejeon (Korea, Republic of); Kim, Seong Min; Kim, Han Kyu [Dept. of Neurosurgery, Eulji University Hospital, Daejeon (Korea, Republic of)

    2011-05-15

    To describe the computed tomography and magnetic resonance imaging features of angiomatous meningiomas. We reviewed the imaging findings of six patients with pathologically proven angiomatous meningiomas and characterized the location, margin, dura base, CT attenuation, MR signal intensity, intratumoral signal void, contrast enhancement, intratumoral cystic change, and peritumoral edema. Most tumors showed high signal intensity on T2-weighted images, and low signal intensity on diffusion-weighted images. After intravenous contrast administration, the tumor showed heterogeneous strong enhancement. Most tumors had a lobulated margin with prominent intratumoral signal voids. Four patients showed marked or small intratumoral cystic changes. Typically, angiomatous meningiomas were dura-based masses characterized by lobulated margins with high signal intensity on T2-weighted imaging (T2WI), low signal intensity on diffusion-weighted imaging (DWI), prominent intratumoral signal voids, intratumoral cystic changes, and marked enhancement after intravenous contrast administration.

  12. Extracting Conceptual Feature Structures from Text

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Lassen, Tine

    2011-01-01

    This paper describes an approach to indexing texts by their conceptual content using ontologies along with lexico-syntactic information and semantic role assignment provided by lexical resources. The conceptual content of meaningful chunks of text is transformed into conceptual feature structures...... and mapped into concepts in a generative ontology. Synonymous but linguistically quite distinct expressions are mapped to the same concept in the ontology. This allows us to perform a content-based search which will retrieve relevant documents independently of the linguistic form of the query as well...

  13. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature at multiple scales. The correspondence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear at small scales, and features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications, such as feature classification and viewpoint selection. Experiments show that our method, as a multi-scale analysis tool, is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  14. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  15. Automated Feature Extraction from Hyperspectral Imagery, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  16. Fingerprint Identification - Feature Extraction, Matching and Database Search

    NARCIS (Netherlands)

    Bazen, A.M.

    2002-01-01

    Presents an overview of state-of-the-art fingerprint recognition technology for identification and verification purposes. Three principal challenges in fingerprint recognition are identified: extracting robust features from low-quality fingerprints, matching elastically deformed fingerprints and

  17. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
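
    A minimal sketch of one such data-driven linear transform (PCA on log-mel filter bank features) is given below; a synthetic tone stands in for real speech, and the librosa and scikit-learn packages are assumed.

      import numpy as np
      import librosa
      from sklearn.decomposition import PCA

      sr = 16000
      t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
      signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)   # stand-in for speech

      # 40-band log-mel filter bank features (25 ms window, 10 ms hop).
      mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=40,
                                           n_fft=400, hop_length=160)
      log_mel = librosa.power_to_db(mel).T                        # frames x 40 features

      projected = PCA(n_components=13).fit_transform(log_mel)     # learned linear transform
      print(projected.shape)                                      # (n_frames, 13)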

  18. Imaging features of unusual intracranial cystic meningiomas

    International Nuclear Information System (INIS)

    Demir, M.K.; Musluman, M.; Kilicoglu, G.; Hakan, T.; Aker, F.V.

    2007-01-01

    To describe the imaging features of unusual intracranial cystic meningiomas in infants and adults. We retrospectively reviewed the magnetic resonance and computed tomography findings for 2 female patients and 3 male patients, ranging in age from 1 to 73 years (median 41 years), with histopathologically proven cystic meningioma. Although cystic meningiomas usually appear as solid and cystic masses, they may present as a mainly multicystic lesion. The wall of a cystic part of the meningioma may include both enhancing and unenhancing areas at imaging. The cystic portion of a meningioma is hypointense on diffusion-weighted images and markedly hyperintense on corresponding apparent diffusion coefficient maps. (author)

  19. Feature extraction applied to agricultural crops as seen by LANDSAT

    Science.gov (United States)

    Kauth, R. J.; Lambeck, P. F.; Richardson, W.; Thomas, G. S.; Pentland, A. P. (Principal Investigator)

    1979-01-01

    The physical interpretation of the spectral-temporal structure of LANDSAT data can be conveniently described in terms of a graphic descriptive model called the Tasseled Cap. This model has been a source of development not only in crop-related feature extraction, but also for data screening and for haze effects correction. Following its qualitative description and an indication of its applications, the model is used to analyze several feature extraction algorithms.

  20. Imaging features of benign adrenal cysts

    International Nuclear Information System (INIS)

    Sanal, Hatice Tuba; Kocaoglu, Murat; Yildirim, Duzgun; Bulakbasi, Nail; Guvenc, Inanc; Tayfun, Cem; Ucoz, Taner

    2006-01-01

    Benign adrenal gland cysts (BACs) are rare lesions with a variable histological spectrum and may mimic not only each other but also malignant ones. We aimed to review imaging features of BACs which can be helpful in distinguishing each entity and determining the subsequent appropriate management

  1. Deep feature extraction and combination for synthetic aperture radar target classification

    Science.gov (United States)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is very important to select discriminative features to train a classifier, which is a prerequisite. Inspired by the great success of convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method, which takes advantage of exploiting the extracted deep features from CNNs on SAR images to introduce more powerful discriminative features and robust representation ability for them. First, the pretrained VGG-S net is fine-tuned on moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, K-nearest neighbors algorithm based on LogDet divergence-based metric learning triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.

  2. Remote Sensing Image Registration Using Multiple Image Features

    Directory of Open Access Journals (Sweden)

    Kun Yang

    2017-06-01

    Full Text Available Remote sensing image registration plays an important role in military and civilian fields, such as natural disaster damage assessment, military damage assessment and ground target identification. However, due to ground relief variations and imaging viewpoint changes, non-rigid geometric distortion occurs between remote sensing images with different viewpoints, which further increases the difficulty of remote sensing image registration. To address this problem, we propose a multi-viewpoint remote sensing image registration method with the following contributions. (i) A finite mixture model based on multiple features is constructed for dealing with different types of image features. (ii) Three features are combined and substituted into the mixture model to form a feature complementation, i.e., the Euclidean distance and shape context are used to measure the similarity of geometric structure, and the SIFT (scale-invariant feature transform) distance, which is endowed with intensity information, is used to measure the scale-space extrema. (iii) To prevent the ill-posed problem, a geometric constraint term is introduced into the L2E-based energy function so that the non-rigid transformation is better behaved. We evaluated the performance of the proposed method on three series of remote sensing images obtained from an unmanned aerial vehicle (UAV) and Google Earth, and compared it with five state-of-the-art methods; our method shows the best alignments in most cases.

  3. Magnetic resonance imaging features of allografts

    International Nuclear Information System (INIS)

    Kattapuram, S.V.; Rosol, M.S.; Rosenthal, D.I.; Palmer, W.E.; Mankin, H.J.

    1999-01-01

    Objective. To investigate the magnetic resonance imaging (MRI) features of allografts at various time intervals after surgery in patients with osteoarticular allografts. Design and patients. Sixteen patients who were treated with osteoarticular allografts and who were followed over time with MRI studies as part of their long-term follow-up were retrospectively selected for this study. T1-weighted images were obtained both before and after gadolinium administration, along with T2-weighted images. All images were reviewed by an experienced musculoskeletal radiologist, with two other experienced radiologists used for consultation. Imaging studies were organized into three groups for ease of discussion: early postoperative period (2 days to 2 months), intermediate postoperative period (3 months to 2 years), and late postoperative period (greater than 2 years). Results. In the early postoperative period, no gadolinium enhancement of the allograft was visible in any of the MR images. A linear, thin layer of periosteal and endosteal tissue enhancement along the margin of the allograft was visible in images obtained at 3-4 months. This enhancement gradually increased in images from later periods and appears to have stabilized in the images obtained approximately 2-3 years after allograft placement. The endosteal enhancement diminished after several years, with examinations conducted between 6 and 8 years following surgery showing minimal endosteal enhancement. However, focal enhancement was noted adjacent to areas of pressure erosion or degenerative cysts. All the cases showed inhomogeneity in the marrow signal (scattered low-signal foci on T1 with corresponding bright signal on T2), and a diffuse, inhomogeneous marrow enhancement later on. Conclusion. We have characterized the basic MRI features of osteoarticular allografts in 16 patients who underwent imaging studies at various time points as part of routine follow-up. We believe that the endosteal and periosteal

  4. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  5. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
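
    The PCA/ICA front end followed by an SVM can be sketched as below; random vectors stand in for real ROI chips, so the accuracies are meaningless and the snippet only illustrates the flow. It assumes numpy and scikit-learn.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      rois = rng.normal(size=(300, 32 * 32))       # flattened 32x32 ROI chips (placeholder data)
      labels = rng.integers(0, 2, size=300)        # target / clutter

      # Compare PCA and ICA as feature extraction stages ahead of the classifier.
      for name, extractor in [("PCA", PCA(n_components=20)),
                              ("ICA", FastICA(n_components=20, max_iter=500))]:
          feats = extractor.fit_transform(rois)
          acc = cross_val_score(SVC(), feats, labels, cv=5).mean()
          print(f"{name}: mean CV accuracy = {acc:.3f}")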

  6. Statistical Feature Extraction and Recognition of Beverages Using Electronic Tongue

    Directory of Open Access Journals (Sweden)

    P. C. PANCHARIYA

    2010-01-01

    Full Text Available This paper describes an approach for extraction of features from data generated by an electronic tongue based on large amplitude pulse voltammetry. In this approach, statistical features of the meaningful selected variables from the current response signals are extracted and used for recognition of beverage samples. The proposed feature extraction approach not only reduces the computational complexity but also reduces the computation time and the data storage requirement for the development of an E-tongue for field applications. With the reduced information, a probabilistic neural network (PNN) was trained for qualitative analysis of different beverages. Before the qualitative analysis of the beverages, the methodology was tested on the basic artificial taste solutions, i.e., sweet, sour, salt, bitter, and umami. The proposed procedure was compared with the more conventional linear feature extraction technique employing principal component analysis combined with a PNN. Using the extracted feature vectors, highly correct classification by the PNN was achieved for eight types of juices and six types of soft drinks. The results indicated that the electronic tongue based on large amplitude pulse voltammetry with the reduced features was capable of discriminating not only basic artificial taste solutions but also various sorts of the same type of natural beverages (fruit juices, vegetable juices, soft drinks, etc.).

  7. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have been an indispensable part of detection and recognition systems. They are widely used in the fields of surveillance, navigation, control and guidance. However, different imagery sensors rely on different imaging mechanisms, work within different ranges of the spectrum, perform different functions and have different operating requirements. It is therefore impractical to accomplish detection or recognition with a single imagery sensor across different circumstances, backgrounds and targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solving this problem, and image fusion has become one of the main technical approaches used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so preserving the useful information to the greatest possible extent is always a central concern in image fusion. That is to say, how to avoid the loss of useful information and how to preserve the features helpful to detection should be taken into account before designing fusion schemes. In consideration of these issues, and of the fact that most detection problems actually amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at the recognition of battlefield targets in complicated backgrounds. In this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by using fractal models that imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the

  8. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    Directory of Open Access Journals (Sweden)

    Gang Hu

    2018-01-01

    Full Text Available The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an ELM was used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared with the traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved.
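
    The ELM classification stage can be written in a few lines of numpy, as in the hedged sketch below: fixed random hidden weights, a sigmoid activation, and output weights solved in closed form by least squares. The "CNN features" here are random stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 128))              # deep features, training set (placeholder)
      y_train = rng.integers(0, 4, size=500)             # four ship classes (placeholder)
      T = np.eye(4)[y_train]                             # one-hot targets

      n_hidden = 256
      W = rng.normal(size=(128, n_hidden))               # fixed random input weights
      b = rng.normal(size=n_hidden)
      H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))       # hidden-layer activations
      beta, *_ = np.linalg.lstsq(H, T, rcond=None)       # output weights in closed form

      X_test = rng.normal(size=(100, 128))
      H_test = 1.0 / (1.0 + np.exp(-(X_test @ W + b)))
      predictions = np.argmax(H_test @ beta, axis=1)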

  9. Barrett's esophagus: clinical features, obesity, and imaging.

    LENUS (Irish Health Repository)

    Quigley, Eamonn M M

    2011-09-01

    The following includes commentaries on clinical features and imaging of Barrett's esophagus (BE); the clinical factors that influence the development of BE; the influence of body fat distribution and central obesity; the role of adipocytokines and proinflammatory markers in carcinogenesis; the role of body mass index (BMI) in healing of Barrett's epithelium; the role of surgery in prevention of carcinogenesis in BE; the importance of double-contrast esophagography and cross-sectional images of the esophagus; and the value of positron emission tomography/computed tomography.

  10. Pattern representation in feature extraction and classifier design: matrix versus vector.

    Science.gov (United States)

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extended pattern representation to the vector, has proven to be effective in feature extraction. However, the subsequent classifier following the matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where it has been demonstrated that the effectiveness in classification is attributable to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., matrix-pattern-oriented feature extraction followed by matrix-pattern-oriented classifier design (MatFE + MatCD). To more comprehensively validate MatFE + MatCD, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector, respectively, i.e., MatFE + MatCD, MatFE + VecCD, just the matrix-pattern-oriented classifier design (MatCD), the vector-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (VecFE + MatCD), the vector-pattern-oriented feature extraction followed by the vector-pattern-oriented classifier design (VecFE + VecCD) and just the vector-pattern-oriented classifier design (VecCD). The experiments on these combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) performs effectively and efficiently on patterns with prior structural knowledge, such as images; and 2) the matrix gives us an alternative feasible pattern representation in feature extraction and classifier design, and meanwhile provides a necessary validation for the "ugly duckling" and "no free lunch" theorems.
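
    To make the matrix-pattern idea concrete, the sketch below applies a 2DPCA-style projection directly to image matrices, without vectorisation; this illustrates matrix-pattern-oriented feature extraction in general, not the specific MatFE + MatCD algorithm, and the image matrices are random stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)
      images = rng.normal(size=(100, 28, 28))            # 100 image-matrix samples (placeholder)

      mean = images.mean(axis=0)
      # Image scatter matrix built from matrices rather than flattened vectors.
      G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
      eigvals, eigvecs = np.linalg.eigh(G)
      proj = eigvecs[:, -5:]                             # top five projection axes

      features = images @ proj                           # each sample -> 28 x 5 feature matrix
      print(features.shape)                              # (100, 28, 5)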

  11. A review of road extraction from remote sensing images

    Directory of Open Access Journals (Sweden)

    Weixing Wang

    2016-06-01

    Full Text Available Because it plays a significant role in traffic management, city planning, road monitoring, GPS navigation and map updating, the technology of road extraction from a remote sensing (RS) image has been a hot research topic in recent years. In this paper, after analyzing different road features and road models, the road extraction methods were classified into classification-based methods, knowledge-based methods, mathematical morphology, active contour models, and dynamic programming. Firstly, the road features, road models, existing difficulties and interference factors for road extraction were analyzed. Secondly, the principles of road extraction and the advantages, disadvantages and research achievements of the various methods were briefly highlighted. Then, the different road extraction algorithms were compared in terms of road features, test samples and shortcomings. Finally, the research results of recent years were summarized. It is clear that using only one kind of road feature makes it hard to achieve an excellent extraction result. Hence, in order to obtain good results, road extraction should combine multiple methods according to the actual application. How to achieve complete road extraction from an RS image remains an essential but challenging and important research topic.

  12. Featured Image: Diamonds in a Meteorite

    Science.gov (United States)

    Kohler, Susanna

    2018-04-01

    This unique image, which measures only 60 x 80 micrometers across, reveals details in the Kapoeta meteorite, an 11-kg stone that fell in South Sudan in 1942. The sparkle in the image? A cluster of nanodiamonds discovered embedded in the stone in a recent study led by Yassir Abdu (University of Sharjah, United Arab Emirates). Abdu and collaborators showed that these nanodiamonds have spectral features similar to the interiors of dense interstellar clouds, and they don't show any signs of shock features. This may suggest that the nanodiamonds were formed by condensation of nebular gases early in the history of the solar system. The diamonds were trapped in the surface material of the Kapoeta meteorite's parent body, thought to be the asteroid Vesta. To read more about the authors' study, check out the original article below. Citation: Yassir A. Abdu et al 2018 ApJL 856 L9. doi:10.3847/2041-8213/aab433

  13. Special feature on imaging systems and techniques

    Science.gov (United States)

    Yang, Wuqiang; Giakos, George

    2013-07-01

    The IEEE International Conference on Imaging Systems and Techniques (IST'2012) was held in Manchester, UK, on 16-17 July 2012. The participants came from 26 countries or regions: Austria, Brazil, Canada, China, Denmark, France, Germany, Greece, India, Iran, Iraq, Italy, Japan, Korea, Latvia, Malaysia, Norway, Poland, Portugal, Sweden, Switzerland, Taiwan, Tunisia, UAE, UK and USA. The technical program of the conference consisted of a series of scientific and technical sessions exploring physical principles, engineering and applications of new imaging systems and techniques, as reflected by the diversity of the submitted papers. Following a rigorous review process, a total of 123 papers were accepted, and they were organized into 30 oral presentation sessions and a poster session. In addition, six invited keynotes were arranged. The conference not only provided the participants with a unique opportunity to exchange ideas and disseminate research outcomes but also paved the way for establishing global collaboration. Following IST'2012, a total of 55 papers, which were substantially extended from their versions in the conference proceedings, were submitted as regular papers to this special feature of Measurement Science and Technology. Following a rigorous reviewing process, 25 papers were finally accepted for publication in this special feature, and they are organized into three categories: (1) industrial tomography, (2) imaging systems and techniques and (3) image processing. These papers not only present the latest developments in the field of imaging systems and techniques but also offer potential solutions to existing problems. We hope that this special feature provides a good reference for researchers who are active in the field and will serve as a catalyst to trigger further research. It has been our great pleasure to be the guest editors of this special feature. We would like to thank the authors for their contributions, without which it would

  14. Feature Evaluation for Building Facade Images - AN Empirical Study

    Science.gov (United States)

    Yang, M. Y.; Förstner, W.; Chai, D.

    2012-08-01

    The classification of building facade images is a challenging problem that receives a great deal of attention in the photogrammetry community. Image classification is critically dependent on the features. In this paper, we perform an empirical feature evaluation task for building facade images. Feature sets we choose are basic features, color features, histogram features, Peucker features, texture features, and SIFT features. We present an approach for region-wise labeling using an efficient randomized decision forest classifier and local features. We conduct our experiments with building facade image classification on the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.

  15. Hybrid feature vector extraction in unsupervised learning neural classifier.

    Science.gov (United States)

    Kostka, P S; Tkacz, E J; Komorowski, D

    2005-01-01

    A feature extraction and selection method is presented as a preliminary stage of an unsupervised learning neural classifier for heart rate variability (HRV) signals. A new multi-domain, mixed feature vector is created from time, frequency and time-frequency parameters of HRV analysis. The optimal feature set for the given classification task was chosen as a result of feature ranking, obtained after computing the class separability measure for every independent feature. The new signal representation prepared in this way in a reduced feature space is the input to a neural classifier based on the Adaptive Resonance Theory (ART2) structure introduced by Grossberg. Tests of the proposed method, carried out on 62 patients with coronary artery disease divided into learning and verification sets, allowed the features that gave the best results to be chosen. The classifier performance measures obtained for the unsupervised learning ART2 neural network were comparable with those reached for multilayer perceptron structures.

  16. An expert botanical feature extraction technique based on phenetic features for identifying plant species.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Full Text Available In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most of the research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially in the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is done based on the Centroid Contour Distance for Every Boundary Point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without considering the commonly used features with high computational cost.

  17. An expert botanical feature extraction technique based on phenetic features for identifying plant species.

    Science.gov (United States)

    Kolivand, Hoshang; Fern, Bong Mei; Rahim, Mohd Shafry Mohd; Sulong, Ghazali; Baker, Thar; Tully, David

    2018-01-01

    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most of the research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially in the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is done based on the Centroid Contour Distance for Every Boundary Point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without considering the commonly used features with high computational cost.

  18. An expert botanical feature extraction technique based on phenetic features for identifying plant species

    Science.gov (United States)

    Fern, Bong Mei; Rahim, Mohd Shafry Mohd; Sulong, Ghazali; Baker, Thar; Tully, David

    2018-01-01

    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most of the research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially in the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is done based on the Centroid Contour Distance for Every Boundary Point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without considering the commonly used features with high computational cost. PMID:29420568
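
    The centroid contour distance idea can be sketched as follows, assuming a binary leaf mask in a placeholder file leaf_mask.png and the opencv-python and scipy packages; this is only an outline of the measurement, not the authors' full lobe/apex/base logic.

      import cv2
      import numpy as np
      from scipy.signal import argrelextrema

      mask = cv2.imread("leaf_mask.png", cv2.IMREAD_GRAYSCALE)         # binary leaf silhouette
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      boundary = max(contours, key=cv2.contourArea).squeeze()          # N x 2 boundary points

      centroid = boundary.mean(axis=0)
      ccd = np.linalg.norm(boundary - centroid, axis=1)                # centroid contour distance

      order = max(1, len(ccd) // 50)                                   # neighbourhood for extrema search
      maxima = argrelextrema(ccd, np.greater_equal, order=order, mode="wrap")[0]
      minima = argrelextrema(ccd, np.less_equal, order=order, mode="wrap")[0]
      print(f"{len(maxima)} candidate lobe tips, {len(minima)} candidate sinuses")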

  19. Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination

    Science.gov (United States)

    Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael

    2017-05-01

    Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.

  20. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    Science.gov (United States)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of tool wear state information. A tool wear signal analysis and feature extraction method based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, from which the Hilbert time-frequency spectrum and the Hilbert marginal spectrum were obtained. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum and used to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring the tool wear condition.
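
    A hedged sketch of the pipeline on a synthetic signal is given below; it assumes the PyEMD package (distributed as EMD-signal) for the decomposition and scipy for the Hilbert transform, and the marginal spectrum is approximated by binning instantaneous amplitude over frequency.

      import numpy as np
      from scipy.signal import hilbert
      from PyEMD import EMD

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)   # stand-in signal

      imfs = EMD().emd(signal)                            # empirical mode decomposition

      freq_bins = np.linspace(0, fs / 2, 256)
      marginal = np.zeros_like(freq_bins)
      for imf in imfs[:3]:                                # keep the dominant IMFs
          analytic = hilbert(imf)
          amplitude = np.abs(analytic)
          inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
          idx = np.clip(np.digitize(inst_freq, freq_bins), 0, len(freq_bins) - 1)
          np.add.at(marginal, idx, amplitude[1:])         # accumulate amplitude per frequency bin

      features = [marginal.mean(), marginal.max(), marginal.argmax()]   # simple amplitude-domain indexes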

  1. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first and an important step. The validity of feature points depends on the threshold. If the threshold is too low, a large number of feature points will be detected and they may aggregate in richly textured regions, which not only slows feature description but also increases the burden of subsequent processing; if the threshold is set too high, feature points will be lacking in poorly textured areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids aggregation of feature points and greatly reduces the complexity of subsequent processing.
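
    A rough OpenCV sketch of the idea follows: a FAST detector is run per grid cell, and the threshold is lowered in cells that yield too few keypoints. The file name, grid size and threshold values are placeholders, not taken from the paper.

      import cv2

      img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
      rows, cols, min_kp = 4, 4, 20
      h, w = img.shape
      keypoints = []                                       # (x, y) in full-image coordinates

      for r in range(rows):
          for c in range(cols):
              cell = img[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
              threshold = 40
              detector = cv2.FastFeatureDetector_create(threshold)
              kps = detector.detect(cell, None)
              while len(kps) < min_kp and threshold > 5:   # auto-adjust the local threshold
                  threshold -= 5
                  detector.setThreshold(threshold)
                  kps = detector.detect(cell, None)
              for kp in kps:                               # shift back to image coordinates
                  keypoints.append((kp.pt[0] + c * w // cols, kp.pt[1] + r * h // rows))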

  2. Determination of the Image Complexity Feature in Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Veacheslav L. Perju

    2003-11-01

    Full Text Available A new image complexity informative feature is proposed, and an experimental estimation of the image complexity is carried out. Two optical-electronic processors for image complexity calculation are elaborated. The necessary number of image digitization elements, depending on the image complexity, was determined, and the accuracy of the image complexity feature calculation was assessed.

  3. A Fourier-based textural feature extraction procedure

    Science.gov (United States)

    Stromberg, W. D.; Farr, T. G.

    1986-01-01

    A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
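
    The annular-emphasis idea can be sketched in a few lines of numpy: for an image patch, estimate the share of spectral power that falls in each of a few frequency annuli of the 2-D Fourier transform. The patch and annulus boundaries below are illustrative only.

      import numpy as np

      patch = np.random.default_rng(0).normal(size=(64, 64))      # stand-in for a texture patch

      spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
      cy, cx = np.array(spectrum.shape) // 2
      yy, xx = np.indices(spectrum.shape)
      radius = np.hypot(yy - cy, xx - cx)                          # distance from the DC component

      edges = [0, 8, 16, 24, 32]                                   # annulus boundaries (cycles/patch)
      total = spectrum.sum()
      features = [spectrum[(radius >= lo) & (radius < hi)].sum() / total
                  for lo, hi in zip(edges[:-1], edges[1:])]
      print(features)                                              # relative emphasis per annulus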

  4. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts.
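
    A toy numpy sketch of the surrogate idea follows: simulated concept counts play the role of candidate features, a combined ICD/NLP count acts as the noisy surrogate, its extremes define silver-standard labels, and the features most associated with those labels are retained. This is only a caricature of SAFE, with all data simulated.

      import numpy as np

      rng = np.random.default_rng(0)
      n_patients, n_features = 1000, 200
      X = rng.poisson(1.0, size=(n_patients, n_features))            # candidate concept counts
      surrogate = X[:, 0] + X[:, 1] + rng.poisson(0.5, n_patients)   # ICD + NLP surrogate counts

      # Silver-standard labels from the extremes of the surrogate distribution.
      silver_pos = surrogate >= np.quantile(surrogate, 0.9)
      silver_neg = surrogate <= np.quantile(surrogate, 0.1)
      keep = silver_pos | silver_neg
      y_silver = silver_pos[keep].astype(float)

      # Rank candidate features by absolute correlation with the silver labels.
      Xs = X[keep]
      corr = np.array([np.corrcoef(Xs[:, j], y_silver)[0, 1] for j in range(n_features)])
      selected = np.argsort(-np.abs(corr))[:10]
      print("selected candidate features:", selected)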

  5. MR imaging features of hemispherical spondylosclerosis

    Energy Technology Data Exchange (ETDEWEB)

    Vicentini, Joao R.T.; Martinez-Salazar, Edgar L.; Chang, Connie Y.; Bredella, Miriam A.; Rosenthal, Daniel I.; Torriani, Martin [Massachusetts General Hospital and Harvard Medical School, Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Boston, MA (United States)

    2017-10-15

    Hemispherical spondylosclerosis (HS) is a rare degenerative entity characterized by dome-shaped sclerosis of a single vertebral body that may pose a diagnostic dilemma. The goal of this study was to describe the MR imaging features of HS. We identified spine radiographs and CT examinations of subjects with HS who also had MR imaging for correlation. Two musculoskeletal radiologists independently assessed sclerosis characteristics, presence of endplate erosions, marrow signal intensity, and disk degeneration (Pfirrmann scale). We identified 11 subjects (six males, five females, mean 48 ± 10 years) with radiographic/CT findings of HS. The most commonly affected vertebral body was L4 (6/11; 55%). On MR imaging, variable signal intensity was noted, being most commonly low on T1 (8/11, 73%) and high on fat-suppressed T2-weighted (8/11, 73%) images. In two subjects, diffuse post-contrast enhancement was seen in the lesion. Moderate disk degeneration and endplate bone erosions adjacent to sclerosis were present in all subjects. Erosions of the opposite endplate were present in two subjects (2/11, 18%). CT data from nine subjects showed the mean attenuation value of HS was 472 ± 96 HU. HS appearance on MR imaging is variable and may not correlate with the degree of sclerosis seen on radiographs or CT. Disk degenerative changes and asymmetric endplate erosions are consistent markers of HS. (orig.)

  6. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of digital surface model, generation of bare earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the next stage, we generated the bare earth DEM from LiDAR point cloud data. In most of the cases, bare earth DEM does not represent true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
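
    The normalization step at the core of this workflow (subtracting the bare-earth model from the surface model to obtain an nDSM/CHM) can be sketched as follows; the file names and the use of the rasterio library are assumptions for illustration.

```python
# Minimal sketch: canopy height model (nDSM/CHM) as the difference between a
# digital surface model and a bare-earth terrain model.
import numpy as np
import rasterio

with rasterio.open("lidar_dsm.tif") as dsm_src, rasterio.open("lidar_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype(float)
    dtm = dtm_src.read(1).astype(float)
    profile = dsm_src.profile

chm = dsm - dtm                      # normalized heights above ground
chm[chm < 0] = 0                     # clamp noise below the terrain surface

with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm.astype(profile["dtype"]), 1)
```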

  7. Bottom-of-sulcus dysplasia: imaging features.

    Science.gov (United States)

    Hofman, Paul A M; Fitt, Gregory J; Harvey, A Simon; Kuzniecky, Ruben I; Jackson, Graeme

    2011-04-01

    Dysplasia at the bottom of a sulcus is a subtle but distinct malformation of cortical development relevant to epilepsy. The purpose of this study was to review the imaging features important to the clinical diagnosis of this lesion. All cases recognized as typical bottom-of-sulcus dysplasia in our comprehensive epilepsy program over the period 2002-2007 were included in the study. In the 20 cases recognized, three major features were identified: cortical thickening at the bottom of a sulcus; a funnel-shaped extension of the lesion toward the ventricular surface, commonly with abnormal signal intensity; and an abnormal gyral pattern related to the bottom-of-sulcus dysplasia, sometimes with a puckered appearance. The pathologic features of the resected lesions were typical of focal cortical dysplasia. Bottom-of-sulcus dysplasia is a distinctive malformation of cortical development that can be diagnosed on the basis of imaging characteristics. Reliable identification of this type of malformation of cortical development is difficult but clinically important because the lesion appears to be highly epileptogenic and because the prognosis for seizure control is excellent after focal resection.

  8. Mass-like extramedullary hematopoiesis: imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Ginzel, Andrew W. [Synergy Radiology Associates, Houston, TX (United States); Kransdorf, Mark J.; Peterson, Jeffrey J.; Garner, Hillary W. [Mayo Clinic, Department of Radiology, Jacksonville, FL (United States); Murphey, Mark D. [American Institute for Radiologic Pathology, Silver Spring, MD (United States)

    2012-08-15

    To report the imaging appearances of mass-like extramedullary hematopoiesis (EMH), to identify those features that are sufficiently characteristic to allow a confident diagnosis, and to recognize the clinical conditions associated with EMH and the relative incidence of mass-like disease. We retrospectively identified 44 patients with EMH; 12 of which (27%) had focal mass-like lesions and formed the study group. The study group consisted of 6 male and 6 female subjects with a mean age of 58 years (range 13-80 years). All 12 patients underwent CT imaging and 3 of the 12 patients had undergone additional MR imaging. The imaging characteristics of the extramedullary hematopoiesis lesions in the study group were analyzed and recorded. The patient's clinical presentation, including any condition associated with extramedullary hematopoiesis, was also recorded. Ten of the 12 (83%) patients had one or more masses located along the axial skeleton. Of the 10 patients with axial masses, 9 (90%) had multiple masses and 7 (70%) demonstrated internal fat. Eight patients (80%) had paraspinal masses and 4 patients (40%) had presacral masses. Seven patients (70%) had splenomegaly. Eleven of the 12 patients had a clinical history available for review. A predisposing condition for extramedullary hematopoiesis was present in 10 patients and included various anemias (5 cases; 45%), myelofibrosis/myelodysplastic syndrome (4 cases; 36%), and marrow proliferative disorder (1 case; 9%). One patient had no known predisposing condition. Mass-like extramedullary hematopoiesis most commonly presents as multiple, fat-containing lesions localized to the axial skeleton. When these imaging features are identified, extramedullary hematopoiesis should be strongly considered, particularly when occurring in the setting of a predisposing medical condition. (orig.)

  9. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...

  10. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

    The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called “Histogram Backprojection”. The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
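
    A minimal sketch of normalized chromaticity histograms and histogram backprojection, in the spirit of the low-level color features described above; the bin count and the epsilon guard are illustrative assumptions.

```python
# Sketch of normalized-chromaticity color features and histogram backprojection.
import numpy as np

def chromaticity(img):
    """img: H x W x 3 float RGB image; returns normalized r, g channels."""
    s = img.sum(axis=2, keepdims=True) + 1e-6
    return img[..., :2] / s            # b is redundant because r + g + b = 1

def chroma_histogram(img, bins=32):
    rg = chromaticity(img).reshape(-1, 2)
    h, _, _ = np.histogram2d(rg[:, 0], rg[:, 1], bins=bins, range=[[0, 1], [0, 1]])
    return h / (h.sum() + 1e-6)        # normalized chromaticity histogram

def backproject(img, model_hist, bins=32):
    """Replace each pixel by the model-histogram value of its chromaticity bin."""
    rg = chromaticity(img)
    idx = np.clip((rg * bins).astype(int), 0, bins - 1)
    return model_hist[idx[..., 0], idx[..., 1]]   # high values = likely same color class
```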

  11. Extended feature-fusion guidelines to improve image-based multi-modal biometrics

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-09-01

    ... be used to help align the global features. These features can also be extracted from palmprints as they share many characteristics of the fingerprint. Facial texture patterns consist of global contour and pore features. Local features, known as facial... to classify different biometric modalities. Global and local features often require algorithms to improve their clarity and consistency over multiple samples. This is particularly the case with contours and pores in face images, principal lines in palmprint...

  12. Imaging features of foot osteoid osteoma

    Energy Technology Data Exchange (ETDEWEB)

    Shukla, Satyen; Clarke, Andrew W.; Saifuddin, Asif [Royal National Orthopaedic Hospital NHS Trust, Department of Radiology, Stanmore, Middlesex (United Kingdom)

    2010-07-15

    We performed a retrospective review of the imaging of nine patients with a diagnosis of foot osteoid osteoma (OO). Radiographs, computed tomography (CT) and magnetic resonance imaging (MRI) had been performed in all patients. Radiographic features evaluated were the identification of a nidus and cortical thickening. CT features noted were nidus location (affected bone - intramedullary, intracortical, subarticular) and nidus calcification. MRI features noted were the presence of an identifiable nidus, presence and grade of bone oedema and whether a joint effusion was identified. Of the nine patients, three were female and six male, with a mean age of 21 years (range 11-39 years). Classical symptoms of OO (night pain, relief with aspirin) were identified in five of eight (62.5%) cases (in one case, the medical records could not be retrieved). In five patients the lesion was located in the hindfoot (four calcaneus, one talus), while four were in the mid- or forefoot (two metatarsal and two phalangeal). Radiographs were normal in all patients with hindfoot OO. CT identified the nidus in all cases except one terminal phalanx lesion (8/9, 89%), while MRI demonstrated a nidus in six of nine cases (67%). The nidus was of predominantly intermediate signal intensity on T1-weighted (T1W) sequences, with intermediate to high signal intensity on T2-weighted (T2W) sequences. High-grade bone marrow oedema limited to the affected bone, together with adjacent soft tissue oedema, was identified in all cases. In a young patient with chronic hindfoot pain and a normal radiograph, MRI features suggestive of possible OO include extensive bone marrow oedema limited to one bone, with a possible nidus demonstrated in two-thirds of cases. The presence or absence of a nidus should be confirmed with high-resolution CT. (orig.)

  13. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  14. Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.

    Science.gov (United States)

    Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae

    2017-01-01

    Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that only considers normalized beat morphology features when running in a resource-constrained environment, but in a high-performance environment takes account of a wider range of ECG features. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.
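
    The adaptive idea can be sketched roughly as follows: a cheap beat-morphology feature set in a resource-constrained setting, and a wider feature set with a second (cascaded) stage otherwise. The feature names, the confidence threshold, and the use of scikit-learn random forests are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of adaptive feature selection plus a two-stage (cascaded) classifier.
from sklearn.ensemble import RandomForestClassifier

MORPHOLOGY = ["qrs_width", "r_amplitude", "rr_interval"]                 # cheap features
EXTENDED = MORPHOLOGY + ["p_duration", "qt_interval", "st_slope", "wavelet_energy"]

def train_cascade(df, labels, low_power=False):
    """df: pandas DataFrame of per-beat features; labels: numpy array of beat classes."""
    cols = MORPHOLOGY if low_power else EXTENDED
    stage1 = RandomForestClassifier(n_estimators=50).fit(df[cols], labels)
    if low_power:
        return stage1, None, cols
    # Second stage refines beats the first stage is unsure about (confidence < 0.8).
    confidence = stage1.predict_proba(df[cols]).max(axis=1)
    unsure = confidence < 0.8
    if unsure.sum() < 10:                      # not enough uncertain beats to refine
        return stage1, None, cols
    stage2 = RandomForestClassifier(n_estimators=200).fit(df[cols][unsure], labels[unsure])
    return stage1, stage2, cols
```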

  15. Feature-aided multiple target tracking in the image plane

    Science.gov (United States)

    Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.

    2006-05-01

    Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
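
    A generic sketch of motion compensation followed by differencing (not the VIVID tracker itself): the previous frame is registered to the current one with ORB features and RANSAC, and the thresholded absolute difference yields candidate moving pixels; the threshold and feature count are assumptions.

```python
# Sketch: compensate platform/camera motion by homography, then difference frames.
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray, thresh=30):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)     # camera-motion model
    warped = cv2.warpPerspective(prev_gray, H, curr_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, warped)
    return (diff > thresh).astype(np.uint8) * 255            # candidate moving pixels
```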

  16. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  17. Feature extraction using regular expression in detecting proper ...

    African Journals Online (AJOL)

    Feature extraction using regular expression in detecting proper noun for Malay news articles based on KNN algorithm. S Sulaiman, R.A. Wahid, F Morsidi. Abstract: No Abstract. Keywords: data mining; named entity recognition; regular expression; natural language processing.

  18. Feature extraction and sensor selection for NPP initiating event identification

    International Nuclear Information System (INIS)

    Lin, Ting-Han; Wu, Shun-Chi; Chen, Kuang-You; Chou, Hwai-Pwu

    2017-01-01

    Highlights: • A two-stage feature extraction scheme for NPP initiating event identification. • With stBP, interrelations among the sensors can be retained for identification. • With dSFS, sensors that are crucial for identification can be efficiently selected. • Efficacy of the scheme is illustrated with data from the Maanshan NPP simulator. - Abstract: Initiating event identification is essential in managing nuclear power plant (NPP) severe accidents. In this paper, a novel two-stage feature extraction scheme that incorporates the proposed sensor type-wise block projection (stBP) and deflatable sequential forward selection (dSFS) is used to elicit the discriminant information in the data obtained from various NPP sensors to facilitate event identification. With the stBP, the primal features can be extracted without eliminating the interrelations among the sensors of the same type. The extracted features are then subjected to a further dimensionality reduction by selecting the sensors that are most relevant to the events under consideration. This selection is not easy, and a combinatorial optimization technique is normally required. With the dSFS, an optimal sensor set can be found with less computational load. Moreover, its sensor deflation stage allows sensors in the preselected set to be iteratively refined to avoid being trapped into a local optimum. Results from detailed experiments containing data of 12 event categories and a total of 112 events generated with a Taiwan’s Maanshan NPP simulator are presented to illustrate the efficacy of the proposed scheme.
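
    A plain sequential forward selection step (without the deflation refinement proposed above) can be sketched as follows; the scoring classifier and the cross-validation settings are illustrative assumptions.

```python
# Minimal sketch of sequential forward selection of sensors/features.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

def forward_select(X, y, n_select):
    """X: (n_samples, n_sensors) array; y: event-category labels."""
    remaining, chosen = list(range(X.shape[1])), []
    while len(chosen) < n_select and remaining:
        scores = []
        for j in remaining:
            cols = chosen + [j]
            acc = cross_val_score(LogisticRegression(max_iter=1000),
                                  X[:, cols], y, cv=3).mean()
            scores.append((acc, j))
        best_acc, best_j = max(scores)         # sensor giving the largest gain
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen                              # indices of the selected sensors
```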

  19. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transform such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  20. Image segmentation by background extraction refinements

    Science.gov (United States)

    Rodriguez, Arturo A.; Mitchell, O. Robert

    1990-01-01

    An image segmentation method refining background extraction in two phases is presented. In the first phase, the method detects homogeneous-background blocks and estimates the local background to be extracted throughout the image. A block is classified homogeneous if its left and right standard deviations are small. The second phase of the method refines background extraction in nonhomogeneous blocks by recomputing the shoulder thresholds. Rules that predict the final background extraction are derived by observing the behavior of successive background statistical measurements in the regions under the presence of dark and/or bright object pixels. Good results are shown for a number of outdoor scenes.

  1. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in the early stages. Thus, the objective of this work is to develop an automatic method for detecting glaucoma in retinal images. The methodology used in the study comprised: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.
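
    A hedged sketch of the texture step: simple per-channel statistics computed from the segmented optic-disc region in two color models and fed to a classifier. The statistics chosen here are an illustrative stand-in for the texture descriptors used in the paper.

```python
# Sketch: basic color/texture statistics from the optic-disc region, then a classifier.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def disc_texture_features(disc_bgr):
    hsv = cv2.cvtColor(disc_bgr, cv2.COLOR_BGR2HSV)
    feats = []
    for img in (disc_bgr, hsv):                      # two color models
        for c in range(3):
            ch = img[..., c].astype(float)
            feats += [ch.mean(), ch.std(), ((ch - ch.mean()) ** 3).mean()]
    return np.array(feats)

# Example usage (disc_regions and y are hypothetical placeholders):
# X = np.vstack([disc_texture_features(roi) for roi in disc_regions])
# clf = RandomForestClassifier().fit(X, y)           # y: 1 = glaucomatous, 0 = healthy
```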

  2. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  3. Imaging features of intracranial solitary fibrous tumors

    International Nuclear Information System (INIS)

    Yu Shuilian; Man Yuping; Ma Longbai; Liu Ying; Wei Qiang; Zhu Youkai

    2012-01-01

    Objective: To summarize the imaging features of intracranial solitary fibrous tumors (ISFT). Methods: Ten patients with histopathologically proven ISFT were collected. Four cases had CT data and all cases had MR data. The imaging features and pathological results were retrospectively analyzed. Results: All cases were misdiagnosed as meningioma preoperatively. All lesions arose from the intracranial meninges, including 5 lesions above the tentorium, 4 lesions beneath the tentorium and 1 lesion growing around the tentorium. The margins of all the masses were well defined, and 8 lesions presented a multilobular shape. CT demonstrated hyperattenuated masses in all 4 lesions, smooth erosion of the basicranial skull in 1 lesion, and punctiform calcification of the capsule in 1 lesion. T1WI showed most lesions with isointense or slightly hyperintense signals, homogeneous in 4 lesions and heterogeneous in 6 lesions. T2WI demonstrated isointense or slightly hyperintense signal in 2 lesions, mixed hypointense and hyperintense signals in 4, a cystic portion in 2, and two distinct portions of hyperintense and hypointense signal (the so-called 'yin-yang' pattern) in 2. Strong enhancement was found in all lesions, especially in the areas of low T2 signal in the 8 lesions with heterogeneous signal. A 'dural tail' was found in 4 lesions. Conclusions: ISFT has some specific CT and MR features, including heterogeneous signal intensity on T2WI, strong enhancement of areas with low T2 signal intensity, a slight or absent 'dural tail', no skull thickening, and the typical 'yin-yang' pattern. (authors)

  4. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
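
    The correlation-analysis idea can be sketched as follows: hidden units whose weight vectors are strongly correlated with an already-kept unit are dropped, so redundant filters are not evaluated on the large target images. The 0.9 threshold is an assumed value.

```python
# Sketch of correlation-based selection among autoencoder weight vectors.
import numpy as np

def select_uncorrelated_units(W, threshold=0.9):
    """W: (n_hidden_units, n_inputs) weight matrix learned by the sparse autoencoder."""
    corr = np.abs(np.corrcoef(W))          # pairwise correlation between weight vectors
    keep = []
    for i in range(W.shape[0]):
        if all(corr[i, j] < threshold for j in keep):
            keep.append(i)                 # keep only units not redundant with kept ones
    return keep                            # indices of retained hidden units / filters
```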

  5. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  6. An edge extraction technique for noisy images

    International Nuclear Information System (INIS)

    Cios, K.J.; Sarieh, A.

    1990-01-01

    We present an algorithm for extracting edges from noisy images. Our method uses an unsupervised learning approach for local threshold computation by means of Pearson's method for mixture density identification. We tested the technique by applying it to computer-generated images corrupted with artificial noise and to an actual Thallium-201 heart image and it is shown that the technique has potential use for noisy images
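
    An illustrative sketch of mixture-based local thresholding, using a two-component Gaussian mixture in place of Pearson's method: the component with the larger mean gradient magnitude is treated as the edge class within each block.

```python
# Sketch: per-block edge mask from a two-component mixture fit to gradient magnitudes.
import numpy as np
from sklearn.mixture import GaussianMixture

def block_edge_mask(block):
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(mag)
    edge_comp = np.argmax(gmm.means_.ravel())     # larger-mean component = edge pixels
    labels = gmm.predict(mag).reshape(block.shape)
    return labels == edge_comp
```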

  7. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  8. Body image features and interpersonal strategies in adolescents

    OpenAIRE

    Fadieieva Kseniia-Marharyta Olegivna

    2015-01-01

    This article describes the phenomenon of teenagers' physical socialization in the context of extracurricular activities. The body image features of participants in dance, sports, travel and psychological courses are examined. The article highlights typical stylistic features of teenagers' body image.

  9. Magnetic Resonance Imaging Features of Neuromyelitis Optica

    International Nuclear Information System (INIS)

    You, Sun Kyung; Song, Chang June; Park, Woon Ju; Lee, In Ho; Son, Eun Hee

    2013-01-01

    To report the magnetic resonance (MR) imaging features of the spinal cord and brain in patients with neuromyelitis optica (NMO). Between January 2001 and March 2010, the MR images (spinal cord, brain, and orbit) and the clinical and serologic findings of 11 NMO patients were retrospectively reviewed. Contrast-enhanced imaging of the spinal cord was performed (20/23). The presence and pattern of the contrast-enhancement in the spinal cord were classified into 5 types. Acute myelitis was monophasic in 8 patients (8/11, 72.7%), and optic neuritis preceded acute myelitis in most patients. Longitudinally extensive cord lesions (average, 7.3 vertebral segments) were involved. The most common type was the diffuse and subtle enhancement of the spinal cord with a multifocal nodular, linear or segmental intense enhancement (45%). Most of the brain lesions (5/11, 10 lesions) were located in the brain stem, thalamus and callososeptal interphase. Anti-Ro autoantibody was positive in 2 patients, and they showed a high relapse rate of acute myelitis. Anti-NMO IgG was positive in 4 patients (4/7, 66.7%). The imaging findings of acute myelitis in NMO may be helpful in making an early diagnosis of NMO, which can result in severe damage to the spinal cord, and in the differential diagnosis against multiple sclerosis and inflammatory diseases of the spinal cord such as toxocariasis.

  10. Magnetic Resonance Imaging Features of Neuromyelitis Optica

    Energy Technology Data Exchange (ETDEWEB)

    You, Sun Kyung; Song, Chang June; Park, Woon Ju; Lee, In Ho; Son, Eun Hee [Chungnam National University College of Medicine, Chungnam National University Hospital, Daejeon (Korea, Republic of)

    2013-03-15

    To report the magnetic resonance (MR) imaging features of the spinal cord and brain in patients with neuromyelitis optica (NMO). Between January 2001 and March 2010, the MR images (spinal cord, brain, and orbit) and the clinical and serologic findings of 11 NMO patients were retrospectively reviewed. Contrast-enhanced imaging of the spinal cord was performed (20/23). The presence and pattern of the contrast-enhancement in the spinal cord were classified into 5 types. Acute myelitis was monophasic in 8 patients (8/11, 72.7%), and optic neuritis preceded acute myelitis in most patients. Longitudinally extensive cord lesions (average, 7.3 vertebral segments) were involved. The most common type was the diffuse and subtle enhancement of the spinal cord with a multifocal nodular, linear or segmental intense enhancement (45%). Most of the brain lesions (5/11, 10 lesions) were located in the brain stem, thalamus and callososeptal interphase. Anti-Ro autoantibody was positive in 2 patients, and they showed a high relapse rate of acute myelitis. Anti-NMO IgG was positive in 4 patients (4/7, 66.7%). The imaging findings of acute myelitis in NMO may be helpful in making an early diagnosis of NMO, which can result in severe damage to the spinal cord, and in the differential diagnosis against multiple sclerosis and inflammatory diseases of the spinal cord such as toxocariasis.

  11. Forged Signature Distinction Using Convolutional Neural Network for Feature Extraction

    Directory of Open Access Journals (Sweden)

    Seungsoo Nam

    2018-01-01

    This paper proposes a dynamic verification scheme for finger-drawn signatures in smartphones. As a dynamic feature, the movement of a smartphone is recorded with accelerometer sensors in the smartphone, in addition to the moving coordinates of the signature. To extract high-level longitudinal and topological features, the proposed scheme uses a convolutional neural network (CNN) for feature extraction, and not as a conventional classifier. We assume that a CNN trained with forged signatures can extract effective features (called S-vector), which are common in forging activities such as hesitation and delay before drawing the complicated part. The proposed scheme also exploits an autoencoder (AE) as a classifier, and the S-vector is used as the input vector to the AE. An AE has high accuracy for the one-class distinction problem such as signature verification, and is also greatly dependent on the accuracy of input data. The S-vector is valuable as the input to the AE, and, consequently, could lead to improved verification accuracy, especially for distinguishing forged signatures. Compared to the previous work, i.e., the MLP-based finger-drawn signature verification scheme, the proposed scheme decreases the equal error rate by 13.7%, specifically, from 18.1% to 4.4%, for discriminating forged signatures.
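
    The verification stage can be sketched as a one-class model built on reconstruction error. Below, a small MLP regressor trained to reproduce its input stands in for the autoencoder, the CNN feature extraction is taken as given, and the threshold rule is an assumption.

```python
# Hedged sketch: one-class signature verification by autoencoder reconstruction error.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_verifier(genuine_svectors):
    """genuine_svectors: (n_samples, n_features) S-vectors of a user's genuine signatures."""
    ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    ae.fit(genuine_svectors, genuine_svectors)          # learn to reconstruct genuine data
    errs = np.mean((ae.predict(genuine_svectors) - genuine_svectors) ** 2, axis=1)
    return ae, errs.mean() + 3 * errs.std()             # assumed threshold rule

def is_genuine(ae, threshold, svector):
    err = np.mean((ae.predict(svector[None, :]) - svector) ** 2)
    return err <= threshold                             # large error suggests a forgery
```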

  12. Micro-Doppler Feature Extraction and Recognition Based on Netted Radar for Ballistic Targets

    Directory of Open Access Journals (Sweden)

    Feng Cun-qian

    2015-12-01

    This study examines the complexities of using netted radar to recognize and resolve ballistic midcourse targets. The application of micro-motion feature extraction to ballistic mid-course targets is analyzed, and the current status of application and research on micro-motion feature recognition is summarized for single-function radar networks such as low- and high-resolution imaging radar networks. Advantages and disadvantages of these networks are discussed with respect to target recognition. Hybrid-mode radar networks combine low- and high-resolution imaging radar and provide a specific reference frequency that is the basis for ballistic target recognition. Main research trends are discussed for hybrid-mode networks that apply micro-motion feature extraction to ballistic mid-course targets.

  13. Extraction of Coal and Gangue Geometric Features with Multifractal Detrending Fluctuation Analysis

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2018-03-01

    The separation of coal and gangue is an important process in coal preparation technology. The conventional way of manually selecting and separating gangue from the raw coal can be replaced by computer vision technology. In the literature, research on image recognition and classification of coal and gangue is mainly based on the grayscale and texture features of the coal and gangue. However, there are few studies on characteristics of coal and gangue from the perspective of their outline differences. Therefore, the multifractal detrended fluctuation analysis (MFDFA) method is introduced in this paper to extract the geometric features of coal and gangue. Firstly, the outline curves of coal and gangue are detected and represented in polar coordinates about the centroid, and the multifractal characteristics of the resulting series are analyzed and compared. Subsequently, the modified local singular spectrum widths Δh of the outline curve series are extracted as the characteristic variables of the coal and gangue for pattern recognition. Finally, the geometric features extracted by MFDFA, combined with the grayscale and texture features of the images, are compared with other methods, indicating that the recognition rate of coal and gangue images can be increased by introducing the geometric features.

  14. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the experiments the FormSpring.me dataset is used and the effects of preprocessing methods; several classifiers like C4.5, Naïve Bayes, kNN, and SVM; and information gain and chi square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopwords removal are applied. Using feature selection also improves cyberbully detection performance. When classifiers are compared, C4.5 performs the best for the used dataset.
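
    The pipeline studied above (bag-of-words feature extraction, chi-square feature selection, a classifier) can be sketched with scikit-learn; the parameter values and the CART tree standing in for C4.5 are illustrative assumptions.

```python
# Minimal sketch: text features, chi-square feature selection, and a tree classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("bow", CountVectorizer(lowercase=True, stop_words=None)),  # no stopword removal
    ("select", SelectKBest(chi2, k=500)),                       # keep 500 best features
    ("clf", DecisionTreeClassifier()),                          # CART as a C4.5 stand-in
])
# Example usage (train_posts/train_labels are hypothetical placeholders):
# pipeline.fit(train_posts, train_labels)         # e.g., FormSpring.me posts
# predictions = pipeline.predict(test_posts)      # 1 = cyberbullying, 0 = not
```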

  15. 3D FEATURE POINT EXTRACTION FROM LIDAR DATA USING A NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Y. Feng

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a suitable alternative to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a backpropagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in the z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
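
    The candidate-detection step (Shi-Tomasi corners on a LiDAR range image) can be sketched as follows; the subsequent neural-network filtering is not reproduced here, and the detector parameters are assumptions.

```python
# Sketch: Shi-Tomasi corner candidates on a LiDAR range image.
import cv2
import numpy as np

def corner_candidates(range_image, max_corners=500):
    """range_image: 2-D float array of ranges; returns (N, 2) pixel coordinates."""
    img8 = cv2.normalize(range_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    corners = cv2.goodFeaturesToTrack(img8, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```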

  16. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow the identification of additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable to determine characteristic features for arbitrary sets of models: The selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map onto concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.

  17. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

    One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Even if drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered as a methodology to increase sensor stability. Several studies evidenced the augmented stability of time-variable signals elicited by the modulation of either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, the extraction of features can be accomplished in a shorter time with respect to the time necessary to calculate the usual features defined in steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal oxide semiconductor gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments were aimed at measuring the dispersion of sensor features in repeated sequences of a limited number of experimental conditions. Results evidenced that the features extracted during the temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate over time with respect to the temperature modulation.

  18. Cluster based statistical feature extraction method for automatic bleeding detection in wireless capsule endoscopy video.

    Science.gov (United States)

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A; Zhu, Wei-Ping; Ahmad, M Omair

    2018-03-01

    Wireless capsule endoscopy (WCE) is capable of demonstrating the entire gastrointestinal tract at an expense of exhaustive reviewing process for detecting bleeding disorders. The main objective is to develop an automatic method for identifying the bleeding frames and zones from WCE video. Different statistical features are extracted from the overlapping spatial blocks of the preprocessed WCE image in a transformed color plane containing green to red pixel ratio. The unique idea of the proposed method is to first perform unsupervised clustering of different blocks for obtaining two clusters and then extract cluster based features (CBFs). Finally, a global feature consisting of the CBFs and differential CBF is used to detect bleeding frame via supervised classification. In order to handle continuous WCE video, a post-processing scheme is introduced utilizing the feature trends in neighboring frames. The CBF along with some morphological operations is employed to identify bleeding zones. Based on extensive experimentation on several WCE videos, it is found that the proposed method offers significantly better performance in comparison to some existing methods in terms of bleeding detection accuracy, sensitivity, specificity and precision in bleeding zone detection. It is found that the bleeding detection performance obtained by using the proposed CBF based global feature is better than the feature extracted from the non-clustered image. The proposed method can reduce the burden of physicians in investigating WCE video to detect bleeding frame and zone with a high level of accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
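
    A hedged sketch of the cluster-based feature idea: block statistics in a green-to-red ratio plane, two-cluster k-means over the blocks, and per-cluster summary plus differential features; the block size and the exact statistics are assumptions.

```python
# Sketch: cluster-based features (CBF) from spatial blocks of a G/R ratio image.
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_features(frame_rgb, block=32):
    ratio = frame_rgb[..., 1].astype(float) / (frame_rgb[..., 0].astype(float) + 1e-6)
    stats = []
    h, w = ratio.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = ratio[y:y + block, x:x + block]
            stats.append([b.mean(), b.std(), b.min(), b.max()])   # per-block statistics
    stats = np.array(stats)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)
    c0 = stats[labels == 0].mean(axis=0)
    c1 = stats[labels == 1].mean(axis=0)
    return np.concatenate([c0, c1, c0 - c1])   # cluster features + differential feature
```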

  19. A Spiking Neural Network in sEMG Feature Extraction.

    Science.gov (United States)

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-11-03

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed comparable accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.

  20. IMAGE LABELING FOR LIDAR INTENSITY IMAGE USING K-NN OF FEATURE OBTAINED BY CONVOLUTIONAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    M. Umemura

    2016-06-01

    We propose an image labeling method for LIDAR intensity images obtained by a Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) matching of features obtained by a Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNNs are effective for various image recognition tasks, we use the features of a CNN (Caffenet) pre-trained on ImageNet. We use the 4,096-dimensional feature at the fc7 layer of the Caffenet as the descriptor of a region, because the feature at the fc7 layer carries effective information for object classification. We extract this feature with the Caffenet from regions cropped from images. Since the similarity between features reflects the similarity of the contents of regions, we can select the top K training regions most similar to a test region. Since regions in training images have manually annotated ground truth labels, we vote the labels attached to the top K similar regions for the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground truth labels. We divide the 36 images into training (28 images) and test (8 images) sets. We use class average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human beings in 97.8% of the pixels in test LIDAR intensity images.
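
    Assuming the 4,096-dimensional fc7 features have already been extracted for the training regions and for a test region, the voting step reduces to a K-nearest-neighbor classification, roughly as sketched below.

```python
# Sketch: majority vote over the K most similar training regions (precomputed features).
from sklearn.neighbors import KNeighborsClassifier

def knn_vote_label(train_feats, train_labels, region_feat, k=5):
    """train_feats: (n_regions, 4096) fc7 features; train_labels: region class labels;
    region_feat: (4096,) feature of the test region."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    return knn.predict(region_feat.reshape(1, -1))[0]   # majority vote of the K neighbors
```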

  1. A Narrative Methodology to Recognize Iris Patterns By Extracting Features Using Gabor Filters and Wavelets

    Directory of Open Access Journals (Sweden)

    Shristi Jha

    2016-01-01

    Iris pattern recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on images of one or both of the irises of an individual’s eyes, whose complex random patterns are unique, stable, and can be seen from some distance. Iris recognition uses video camera technology with subtle near-infrared illumination to acquire images of the detail-rich, intricate structures of the iris which are visible externally. In this narrative research paper, the input image is first captured; since the success of iris recognition depends on the quality of the image, the captured image is subjected to preliminary preprocessing techniques such as localization, segmentation, normalization and noise detection. Texture and edge features are then extracted using Gabor filters and wavelets, and the processed image is matched against templates stored in the database to recognize the iris pattern.
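
    The Gabor stage can be sketched as a small filter bank applied to the normalized iris strip; the filter-bank parameters are illustrative, and the preceding localization, segmentation and normalization are assumed to have been done.

```python
# Sketch: texture features from a small Gabor filter bank over a normalized iris image.
import cv2
import numpy as np

def gabor_features(norm_iris, ksize=31):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        for lam in (8.0, 16.0):                           # 2 wavelengths
            kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lam, 0.5, 0,
                                      ktype=cv2.CV_32F)
            resp = cv2.filter2D(norm_iris.astype(np.float32), cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]            # response statistics per filter
    return np.array(feats)                                # compared against stored templates
```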

  2. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.

  3. Image Understanding and Information Extraction

    Science.gov (United States)

    1976-09-01

    with respect to the computer analysis of image data (and especially multispectral image data) from high-flying aircraft and earth-orbiting satellites... [Figure: Boeing 707-320, real-axis plot; remaining figure content not recoverable.]

  4. Featured Image: Bright Dots in a Sunspot

    Science.gov (United States)

    Kohler, Susanna

    2018-03-01

    This image of a sunspot, located in NOAA AR 12227, was captured in December 2014 by the 0.5-meter Solar Optical Telescope on board the Hinode spacecraft. This image was processed by a team of scientists led by Rahul Yadav (Udaipur Solar Observatory, Physical Research Laboratory Dewali, India) in order to examine the properties of umbral dots: transient, bright features observed in the umbral region (the central, darkest part) of a sunspot. By exploring these dots, Yadav and collaborators learned how their properties relate to the large-scale properties of the sunspots in which they form: for instance, how do the number, intensities, or filling factors of dots relate to the size of a sunspot's umbra? To find out more about the authors' results, check out the article below. Sunspot in NOAA AR 11921. Left: umbral-penumbral boundary. Center: the isolated umbra from the sunspot. Right: the umbra with locations of umbral dots indicated by yellow plus signs. [Adapted from Yadav et al. 2018] Citation: Rahul Yadav et al 2018 ApJ 855 8. doi:10.3847/1538-4357/aaaeba

  5. Extracting BI-RADS Features from Portuguese Clinical Texts

    OpenAIRE

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method.
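
    As a toy illustration only (not the authors' semantic grammar), a single regular expression can already pull a BI-RADS category out of Portuguese report text; the pattern and the example sentence below are assumptions.

```python
# Toy sketch: extract a BI-RADS category from Portuguese free text with a regex.
import re

BIRADS_RE = re.compile(r"BI[\s-]*RADS\s*(?:categoria\s*)?([0-6])", re.IGNORECASE)

def extract_birads_category(report_text):
    m = BIRADS_RE.search(report_text)
    return int(m.group(1)) if m else None

print(extract_birads_category("Mamografia: BI-RADS categoria 4, nódulo espiculado."))  # 4
```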

  6. GA Based Optimal Feature Extraction Method for Functional Data Classification

    OpenAIRE

    Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai

    2010-01-01

    Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, management, etc. Owing to the high dimensionality and high correlation of functional data (FD), a key problem is to extract features from FD while keeping its global characteristics, which strongly affects classification efficiency and precision. In this paper...

  7. Feature Extraction and Pattern Identification for Anemometer Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Longji Sun

    2012-01-01

    Cup anemometers are commonly used for wind speed measurement in the wind industry. Anemometer malfunctions lead to excessive errors in measurement and directly influence the wind energy development for a proposed wind farm site. This paper is focused on feature extraction and pattern identification to solve the anemometer condition diagnosis problem of the PHM 2011 Data Challenge Competition. Since the accuracy of anemometers can be severely affected by environmental factors such as icing and the tubular tower itself, in order to distinguish the cause due to anemometer failures from these factors, our methodologies start with eliminating irregular data (outliers) under the influence of environmental factors. For paired data, the relation between the relative wind speed difference and the wind direction is extracted as an important feature to reflect normal or abnormal behaviors of paired anemometers. Decisions regarding the condition of paired anemometers are made by comparing the features extracted from training and test data. For shear data, a power law model is fitted using the preprocessed and normalized data, and the sum of the squared residuals (SSR) is used to measure the health of an array of anemometers. Decisions are made by comparing the SSRs of training and test data. The performance of our proposed methods is evaluated through the competition website. As a final result, our team ranked second place overall in both the student and professional categories in this competition.
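
    The shear-data check described above can be sketched as a power-law fit in log-log space followed by a sum-of-squared-residuals health measure; the variable names and the fitting choice are illustrative.

```python
# Sketch: fit v(h) = a * h^b to anemometer readings and use the SSR as a health measure.
import numpy as np

def power_law_ssr(heights, speeds):
    """heights, speeds: 1-D arrays of mounting heights and mean wind speeds."""
    log_h, log_v = np.log(heights), np.log(speeds)
    b, log_a = np.polyfit(log_h, log_v, 1)        # slope = shear exponent, intercept = log(a)
    pred = np.exp(log_a) * heights ** b
    return np.sum((speeds - pred) ** 2)           # large SSR suggests a degraded sensor array
```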

  8. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  9. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
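
    Neither record includes the auto-adaptive detector itself, but the underlying SIFT tie-point extraction and matching step can be sketched with OpenCV. The example below assumes an OpenCV build that exposes cv2.SIFT_create (version 4.4 or later, or opencv-contrib) and uses Lowe's ratio test; the file paths and ratio threshold are placeholders.

```python
import cv2

def sift_tie_points(path_left, path_right, ratio=0.8):
    """Detect SIFT keypoints in two overlapping images and return the
    matches that pass Lowe's ratio test (candidate tie points)."""
    img1 = cv2.imread(path_left, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_right, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    # Return pixel coordinates of matched keypoints in both images
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```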

  10. Artificial-neural-network-based classification of mammographic microcalcifications using image structure features

    Science.gov (United States)

    Dhawan, Atam P.; Chitre, Yateen S.; Moskowitz, Myron

    1993-07-01

    Mammography associated with clinical breast examination and self-breast examination is the only effective and viable method for mass breast screening. It is, however, difficult to distinguish between benign and malignant microcalcifications associated with breast cancer. Most of the techniques used in the computerized analysis of mammographic microcalcifications segment the digitized gray-level image into regions representing microcalcifications. We present a second-order gray-level histogram-based feature extraction approach to extract microcalcification features. These features, called image structure features, are computed from second-order gray-level histogram statistics and do not require segmentation of the original image into binary regions. Several image structure features were computed for 100 'difficult to diagnose' microcalcification cases with known biopsy results. These features were analyzed in a correlation study which provided a set of the five best image structure features. A feedforward backpropagation neural network was used to classify mammographic microcalcifications using the image structure features. The network was trained on 10 cases of mammographic microcalcifications and tested on an additional 85 'difficult-to-diagnose' microcalcification cases using the selected image structure features. The trained network yielded good results for classification of 'difficult-to-diagnose' microcalcifications into benign and malignant categories.
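
    The paper's exact image structure features are not listed in the abstract, but second-order gray-level (co-occurrence) statistics of the kind it describes can be computed without segmenting the image, for example with scikit-image (graycomatrix and graycoprops in release 0.19+, spelled greycomatrix in older releases). The feature subset below is only illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def second_order_features(patch, distances=(1,), angles=(0, np.pi / 2)):
    """Compute a few second-order gray-level (co-occurrence) statistics
    from a 2-D uint8 image patch; no binary segmentation is required."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```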

  11. A flexible data-driven comorbidity feature extraction framework.

    Science.gov (United States)

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP) which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with the commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains between 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.
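
    As a rough sketch of the general idea (not the authors' exact pipeline), diagnosis codes can be clustered by co-occurrence and the clusters used as per-patient count features. The helper below is hypothetical; the distance transform, linkage method, and cluster count are all assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_code_features(patient_codes, n_clusters=10):
    """Cluster diagnosis codes by co-occurrence across patient records and
    return one cluster-count feature vector per patient."""
    codes = sorted({c for rec in patient_codes for c in rec})
    idx = {c: i for i, c in enumerate(codes)}
    cooc = np.zeros((len(codes), len(codes)))
    for rec in patient_codes:
        for a, b in combinations(set(rec), 2):
            cooc[idx[a], idx[b]] += 1
            cooc[idx[b], idx[a]] += 1
    # Turn co-occurrence counts into a dissimilarity and cluster the codes
    dist = 1.0 / (1.0 + cooc)
    condensed = dist[np.triu_indices(len(codes), 1)]
    labels = fcluster(linkage(condensed, method="average"),
                      n_clusters, criterion="maxclust")
    features = np.zeros((len(patient_codes), n_clusters))
    for p, rec in enumerate(patient_codes):
        for c in rec:
            features[p, labels[idx[c]] - 1] += 1
    return features, dict(zip(codes, labels))
```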

  12. Semantic feature extraction for interior environment understanding and retrieval

    Science.gov (United States)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration application. The system, V2ID (Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about the global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape), to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  13. Topologically Ordered Feature Extraction Based on Sparse Group Restricted Boltzmann Machines

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2015-01-01

    Full Text Available How to extract topologically ordered features efficiently from high-dimensional data is an important problem in the unsupervised feature learning domain for deep learning. To address this problem, we propose a new type of regularization for Restricted Boltzmann Machines (RBMs). Adding two extra terms in the log-likelihood function to penalize the group weights and topologically ordered factors, this type of regularization extracts topologically ordered features based on sparse group Restricted Boltzmann Machines (SGRBMs). Therefore, it encourages an RBM to learn a much smoother probability distribution because its formulations turn out to be a combination of the group weight-decay and topologically ordered factor regularizations. We apply the proposed regularization scheme to image datasets of natural images and Flying Apsara images in the Dunhuang Grotto Murals at four different historical periods. The experimental results demonstrate that the combination of these two extra terms in the log-likelihood function helps to extract more discriminative features with much sparser and more aggregative hidden activation probabilities.

  14. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    Science.gov (United States)

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three subsets of Quickbird pan-sharpened high resolution satellite image for the area of the Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  15. FEATURE EVALUATION FOR BUILDING FACADE IMAGES – AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    M. Y. Yang

    2012-08-01

    Full Text Available The classification of building facade images is a challenging problem that receives a great deal of attention in the photogrammetry community. Image classification is critically dependent on the features. In this paper, we perform an empirical feature evaluation task for building facade images. Feature sets we choose are basic features, color features, histogram features, Peucker features, texture features, and SIFT features. We present an approach for region-wise labeling using an efficient randomized decision forest classifier and local features. We conduct our experiments with building facade image classification on the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.

  16. PROCESSING OF SCANNED IMAGERY FOR CARTOGRAPHIC FEATURE EXTRACTION.

    Science.gov (United States)

    Benjamin, Susan P.; Gaydos, Leonard

    1984-01-01

    Digital cartographic data are usually captured by manually digitizing a map or an interpreted photograph or by automatically scanning a map. Both techniques first require manual photointerpretation to describe features of interest. A new approach, bypassing the laborious photointerpretation phase, is being explored using direct digital image analysis. Aerial photographs are scanned and color separated to create raster data. These are then enhanced and classified using several techniques to identify roads and buildings. Finally, the raster representation of these features is refined and vectorized. 11 refs.

  17. Extraction of urban vegetation with Pleiades multiangular images

    Science.gov (United States)

    Lefebvre, Antoine; Nabucet, Jean; Corpetti, Thomas; Courty, Nicolas; Hubert-Moy, Laurence

    2016-10-01

    Vegetation is essential in urban environments since it provides significant services in terms of health, heat, property value, ecology, etc. As part of the European Union Biodiversity Strategy Plan for 2020, the protection and development of green infrastructures is strengthened in urban areas. In order to evaluate and monitor the quality of green infrastructures, this article investigates the contribution of Pléiades multi-angular images to extracting and characterizing low and high urban vegetation. From such images one can extract both spectral and elevation information. Our method is composed of 3 main steps: (1) the computation of a normalized Digital Surface Model from the multi-angular images; (2) extraction of spectral and contextual features; (3) a classification of vegetation classes (tree and grass) performed with a random forest classifier. Results obtained for the city of Rennes, France, show the ability of multi-angular images to extract a DEM in urban areas despite building heights. They also highlight the importance of elevation information and its complementarity with contextual information for extracting urban vegetation.

  18. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.
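
    A schematic of the typical layer stack the paper discusses (convolution, nonlinearity, pooling, fully connected classifier) is shown below in PyTorch. It is not the block structure proposed in the paper; the layer sizes assume 28x28 single-channel inputs purely for illustration.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A schematic CNN: two conv/ReLU/pool blocks followed by a classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 1, 28, 28))   # a batch of four 28x28 images
```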

  19. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    Science.gov (United States)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the image (character/numeral) is further divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in the zone are computed (two features). Similarly, the zone centroid is computed (two features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image. Some zones may be empty; the corresponding values in the feature vector are then set to zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained 97.55%, 94%, 92.5% and 95.2% recognition rates for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
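
    A minimal sketch of the zone-based features described above is given below. It computes only the character-centroid distance and angle pair per zone, on an assumed square grid of a binary numeral image, and omits the zone-centroid features and the classifier.

```python
import numpy as np

def zone_centroid_features(img, n=4):
    """Zone-based features for a binary numeral image: for each of the
    n x n zones, the average distance and average angle of foreground
    pixels measured from the character centroid (zeros for empty zones)."""
    ys, xs = np.nonzero(img)            # foreground (ink) pixels
    cy, cx = ys.mean(), xs.mean()       # character centroid
    h, w = img.shape
    feats = []
    for i in range(n):
        for j in range(n):
            zone = img[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            zy, zx = np.nonzero(zone)
            if zy.size == 0:
                feats += [0.0, 0.0]     # empty zone contributes zeros
                continue
            dy = zy + i * h // n - cy
            dx = zx + j * w // n - cx
            feats.append(np.hypot(dy, dx).mean())        # average distance
            feats.append(np.arctan2(dy, dx).mean())      # average angle
    return np.array(feats)
```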

  20. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and the improvement of storage technology and the capability of digital image equipment, more image resources are available to us than ever. Thus a solution for locating the proper image quickly and accurately is needed. The early method was to set up key words for searching in a database, but this method becomes very difficult when searching over the much larger number of pictures we now need. In order to overcome the limitation of the traditional searching method, content-based image retrieval technology arose, and it is now a hot research subject. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the expression of color, the abstraction of the color characteristic, and the measurement of likeness based on color. On this basis, the extraction technology of the color histogram characteristic is especially discussed. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. The basic idea is to divide the image space according to a certain strategy, and then calculate the color histogram of each block as the color feature of this block. Users choose the blocks that contain important spatial information and confirm their weight values. The system calculates the distance between the corresponding blocks that users chose. Other blocks merge into part-overall histograms again, and their distance is also calculated. All distances are then accumulated as the real distance between two pictures. The partition-overall histogram comprehensively utilizes the advantages of the two methods above; choosing blocks makes the feature contain more spatial information, which can improve performance; the distances between partition-overall histogram
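
    The partition histogram idea can be sketched as follows: compute one color histogram per block and accumulate block-wise distances into an image distance. This is a simplified stand-in for the partition-overall scheme described above; the block count, bin count, L1 block distance and uniform weights are all assumptions.

```python
import numpy as np

def block_histograms(img, blocks=4, bins=8):
    """Split an 8-bit RGB image into blocks x blocks regions and return one
    normalized per-channel color histogram per block."""
    h, w, _ = img.shape
    hists = []
    for i in range(blocks):
        for j in range(blocks):
            patch = img[i * h // blocks:(i + 1) * h // blocks,
                        j * w // blocks:(j + 1) * w // blocks]
            hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
                    for c in range(3)]
            hist = np.concatenate(hist).astype(float)
            hists.append(hist / hist.sum())
    return np.array(hists)

def image_distance(img_a, img_b, weights=None):
    """Accumulate block-wise histogram distances into one image distance."""
    ha, hb = block_histograms(img_a), block_histograms(img_b)
    d = np.abs(ha - hb).sum(axis=1)          # L1 distance per block
    weights = np.ones(len(d)) if weights is None else np.asarray(weights)
    return float((weights * d).sum())
```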

  1. CLINICAL AND IMAGING FEATURES OF OTHELLO'S SYNDROME

    Science.gov (United States)

    Graff-Radford, Jonathan; Whitwell, Jennifer L.; Geda, Yonas E.; Josephs, Keith A.

    2011-01-01

    Background Our objective was to document the clinical and imaging features of Othello's syndrome (delusional jealousy). Methods The study design was a retrospective case series of 105 patients with Othello's syndrome who were identified using the Electronic Medical Record system of Mayo Clinic. Results The average age at onset of Othello's syndrome was 68 (25–94) years, with 61.9% of patients being male. Othello's syndrome was most commonly associated with a neurological disorder (73/105) compared with psychiatric disorders (32/105). Of the patients with a neurological disorder, 76.7% had a neurodegenerative disorder. Seven of eight patients with a structural lesion associated with Othello's syndrome had right frontal lobe pathology. Voxel-based morphometry showed greater grey matter loss predominantly in the dorsolateral frontal lobes in the neurodegenerative patients with Othello's compared to matched patients with neurodegenerative disorders without Othello's syndrome. Treatment success was notable for patients with dopamine agonist-induced Othello's syndrome, in which all six patients had improvement in symptoms following a decrease in medication. Conclusions This study demonstrates that Othello's syndrome occurs most frequently with neurological disorders. This delusion appears to be associated with dysfunction of the frontal lobes, especially the right frontal lobe. PMID:21518145

  2. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Features extraction algorithm about typical railway perimeter intrusion event

    Science.gov (United States)

    Zhou, Jieyun; Wang, Chaodong; Liu, Lihai

    2017-10-01

    Research purposes: Optical fiber vibration sensing systems have been widely used in the oil, gas, frontier defence, prison and power industries. However, there are few reports about their application in railway defence, because the surrounding environment is complicated and there are many challenges to be overcome in applying an optical fiber vibration sensing system. For example, how to eliminate the effects of vibration caused by trains and by natural factors such as wind and rain, and how to identify and classify the intrusion events. In order to solve these problems, the feature signals of these events should be extracted first. Research conclusions: (1) In an optical fiber vibration sensing system based on a Sagnac interferometer, the peak-to-peak value, peak-to-average ratio, standard deviation, zero-crossing rate, short-term energy and kurtosis may serve as feature signals. (2) The feature signals of the resting state, climbing a concrete fence, breaking barbed wire, knocking a concrete fence and rainstorm have been extracted, and they show significant differences from one another. (3) The research conclusions can be used in the identification and classification of intrusion events.
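
    The feature signals listed in conclusion (1) are standard time-domain statistics and can be computed per frame of the vibration waveform roughly as below; the frame length and any preprocessing are left to the reader, and the function is only illustrative.

```python
import numpy as np
from scipy.stats import kurtosis

def frame_features(x):
    """Feature signals for one frame of a vibration waveform."""
    x = np.asarray(x, dtype=float)
    zero_crossings = np.count_nonzero(x[:-1] * x[1:] < 0)
    return {
        "peak_to_peak": float(x.max() - x.min()),
        "peak_to_average_ratio": float(np.abs(x).max() / np.abs(x).mean()),
        "std": float(x.std()),
        "zero_crossing_rate": zero_crossings / len(x),
        "short_term_energy": float(np.sum(x ** 2)),
        "kurtosis": float(kurtosis(x)),
    }
```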

  4. Lung image patch classification with automatic feature learning.

    Science.gov (United States)

    Li, Qing; Cai, Weidong; Feng, David Dagan

    2013-01-01

    Image patch classification is an important task in many different medical imaging applications. The classification performance is usually highly dependent on the effectiveness of image feature vectors. While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Automatic feature learning from image data has thus emerged as a different trend recently, to capture the intrinsic image features without manual feature design. In this paper, we propose to create multi-scale feature extractors based on an unsupervised learning algorithm; and obtain the image feature vectors by convolving the feature extractors with the image patches. The auto-generated image features are data-adaptive and highly descriptive. A simple classification scheme is then used to classify the image patches. The proposed method is generic in nature and can be applied to different imaging domains. For evaluation, we perform image patch classification to differentiate various lung tissue patterns commonly seen in interstitial lung disease (ILD), and demonstrate promising results.

  5. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used for dimensionality reduction of the high-dimensional feature vector including the extracted texture features, since a high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
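
    A compact stand-in for the classification stage (PCA for dimensionality reduction followed by one-against-all SVMs with a Gaussian RBF kernel) can be written with scikit-learn as below; the number of retained components and the input texture features are placeholders, and the DNS-map extraction itself is not shown.

```python
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_classifier(n_components=20):
    """PCA for dimensionality reduction followed by one-against-all
    SVMs with a Gaussian RBF kernel."""
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
    )

# X_train holds texture feature vectors (e.g. from a DNS map), y_train the
# fault labels; both are placeholders for the user's own data.
# clf = build_classifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```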

  6. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.

  7. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact and has created a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also post reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed so that a reader can grasp the gist of the whole body of opinion. The idea comes from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the main focus domain is digital cameras. This research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the result. The methods discussed in this research include Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that aggregates consumer reviews by feature and sentiment. With the proposed method, the accuracy for sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy for feature extraction reaches 90.3%.
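
    The sentiment-classification step can be illustrated with a tiny scikit-learn Naive Bayes pipeline. This is not the authors' system (no dependency analysis or implicit-feature dictionary); the toy reviews and labels below are invented purely to show the mechanics.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative review snippets about camera features; real training
# data would come from labelled e-commerce reviews.
reviews = ["battery life is great", "love the zoom", "battery drains fast",
           "zoom is blurry and slow"]
labels = ["positive", "positive", "negative", "negative"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["the battery is great but the zoom is slow"]))
```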

  8. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is lack of awareness and proper health care. As they say, prevention is better than cure, so a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help save their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set, to see which better suits the automated system in terms of a higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images, and the extracted data are subjected to data mining techniques to determine which feature is best for which stage of cancer. The results clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  9. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... strategies (error functions and training algorithms) for artificial neural networks are examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen’s method. Results reveal the best training strategies for neural networks and suggest their superiority...... over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...

  10. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, and the least-square estimation method is employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built on the features, are formed into a vector called a feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to other existing techniques.
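
    One hedged reading of the fusion step is sketched below: regress one feature group on the other per region, take the least-squares coefficients as the image signature, and compare signatures with the Canberra distance. The regression design (which features act as response versus predictors) is an assumption, not the paper's stated model.

```python
import numpy as np
from scipy.spatial.distance import canberra

def fused_signature(color_feats, texture_feats):
    """Fuse per-region color and texture features by regressing the color
    features on the texture features; the stacked least-squares
    coefficients serve as the image signature."""
    X = np.column_stack([np.ones(len(texture_feats)), texture_feats])
    beta, _, _, _ = np.linalg.lstsq(X, color_feats, rcond=None)
    return beta.ravel()

def image_distance(sig_a, sig_b):
    """Canberra distance between two fused feature vectors."""
    return canberra(sig_a, sig_b)
```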

  11. Corner Feature Extraction: Techniques for Landmark Based Navigation Systems

    OpenAIRE

    Namoshe, Molaletsa; Matsebe, Oudetse; Tlale, Nkgatho

    2010-01-01

    In this paper we discuss the results of an EKF SLAM using real data logged and computed offline. One of the most important parts of the SLAM process is to accurately map the environment the robot is exploring and to localize within it. Achieving this, however, depends on the precise acquisition of features extracted from the external sensor. We looked at corner detection methods and proposed an improved version of the method discussed in section 2.1.1. It transpired that methods found in th...

  12. Three-dimensional object recognition via integral imaging and scale invariant feature transform

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu

    2014-06-01

    We propose a three-dimensional (3D) object recognition approach via computational integral imaging and the scale invariant feature transform (SIFT) that can be invariant to object changes in illumination, scale, rotation and affine transformation. Usually, matching between features extracted from the reference object and those in the computationally reconstructed image should be done for 3D object recognition. However, this process requires all of the depth images to be reconstructed one by one first, which affects the recognition efficiency. Considering that there is a set of elemental images with different viewpoints in integral imaging, we first recognize the object in the 2D images by using five elemental images and then choose the one elemental image with the most matching points from the five images. This selected image will include more information related to the reference object. Finally, we can use this selected elemental image and its neighboring elemental images, which should also contain much reference object information, to calculate the disparity with the SIFT algorithm. Consequently, the depth of the 3D object can be obtained with stereo camera theory, and the recognized 3D object can be reconstructed in computational integral imaging. This method fully utilizes the different information provided by elemental images and the robust SIFT feature extraction algorithm to recognize 3D objects.

  13. Feature extraction and dimensionality reduction for mass spectrometry data.

    Science.gov (United States)

    Liu, Yihui

    2009-09-01

    Mass spectrometry is being used to generate protein profiles from human serum, and proteomic data obtained from mass spectrometry have attracted great interest for the detection of early stage cancer. However, high dimensional mass spectrometry data cause considerable challenges. In this paper we propose a feature extraction algorithm based on wavelet analysis for high dimensional mass spectrometry data. A set of wavelet detail coefficients at different scales is used to detect the transient changes in mass spectrometry data. The experiments are performed on 2 datasets. A highly competitive accuracy, compared with the best performance of other kinds of classification models, is achieved. Experimental results show that the wavelet detail coefficients are an efficient way to characterize features of high dimensional mass spectra and to reduce their dimensionality.
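
    A minimal sketch of the wavelet step, using the PyWavelets package: decompose each spectrum, then keep a fixed number of the largest detail coefficients per scale as a reduced feature vector. The wavelet family, decomposition level, and truncation rule are assumptions chosen for illustration.

```python
import numpy as np
import pywt

def wavelet_detail_features(spectrum, wavelet="db4", levels=4, keep=50):
    """Decompose one mass spectrum with a discrete wavelet transform and
    keep the largest-magnitude detail coefficients at each scale as a
    compact feature vector."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=levels)
    details = coeffs[1:]                      # skip the approximation band
    feats = []
    for d in details:
        top = np.sort(np.abs(d))[::-1][:keep]
        feats.append(np.pad(top, (0, max(0, keep - top.size))))
    return np.concatenate(feats)
```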

  14. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

    This brief presents a comprehensive introduction to feature coding, which serves as a key module in the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting recent developments in feature coding, covering the following five aspects: (1) Review the state of the art, analyzing the motivations and mathematical representations of various feature coding methods; (2) Explore how various feature coding algorithms have evolved over the years; (3) Summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D

  15. Fusion of Pixel-based and Object-based Features for Road Centerline Extraction from High-resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    CAO Yungang

    2016-10-01

    Full Text Available A novel approach for road centerline extraction from high spatial resolution satellite imagery is proposed by fusing both pixel-based and object-based features. Firstly, texture and shape features are extracted at the pixel level, and spectral features are extracted at the object level based on multi-scale image segmentation maps. Then, extracted multiple features are utilized in the fusion framework of Dempster-Shafer evidence theory to roughly identify the road network regions. Finally, an automatic noise removing algorithm combined with the tensor voting strategy is presented to accurately extract the road centerline. Experimental results using high-resolution satellite imageries with different scenes and spatial resolutions showed that the proposed approach compared favorably with the traditional methods, particularly in the aspect of eliminating the salt noise and conglutination phenomenon.

  16. Radiomic features analysis in computed tomography images of lung nodule classification.

    Directory of Open Access Journals (Sweden)

    Chia-Hung Chen

    Full Text Available Radiomics, which extracts a large number of quantitative image features from diagnostic medical images, has been widely used for prognostication, treatment response prediction and cancer detection. The treatment options for lung nodules depend on their diagnosis, benign or malignant. Conventionally, lung nodule diagnosis is based on invasive biopsy. Recently, radiomics features, a non-invasive method based on clinical images, have shown high potential in lesion classification and treatment outcome prediction. Lung nodule classification using radiomics based on Computed Tomography (CT) image data was investigated, and a 4-feature signature was introduced for lung nodule classification. Retrospectively, 72 patients with 75 pulmonary nodules were collected. Radiomics feature extraction was performed on non-enhanced CT images with contours which were delineated by an experienced radiation oncologist. Among the 750 image features in each case, 76 features were found to have significant differences between benign and malignant lesions. A radiomics signature was composed of the best 4 features, which included Laws_LSL_min, Laws_SLL_energy, Laws_SSL_skewness and Laws_EEL_uniformity. The accuracy of the signature in benign versus malignant classification was 84%, with a sensitivity of 92.85% and a specificity of 72.73%. The classification signature based on radiomics features demonstrated very good accuracy and high potential for clinical application.

  17. Feature Extraction for Bearing Prognostics and Health Management (PHM) - A Survey (Preprint)

    National Research Council Canada - National Science Library

    Yan, Weizhong; Qiu, Hai; Iyer, Naresh

    2008-01-01

    Feature extraction in bearing PHM involves extracting characteristic signatures from the original sensor measurements, which are sensitive to bearing conditions and thus most useful in determining bearing faults...

  18. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....... of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature learning, we develop a novel codebook learning method, based on the Cauchy-Schwarz Quadratic Mutual Information (CSQMI) measure, to learn a compact...

  19. Real-time hypothesis driven feature extraction on parallel processing architectures

    DEFF Research Database (Denmark)

    Granmo, O.-C.; Jensen, Finn Verner

    2002-01-01

    Feature extraction in content-based indexing of media streams is often computational intensive. Typically, a parallel processing architecture is necessary for real-time performance when extracting features brute force. On the other hand, Bayesian network based systems for hypothesis driven feature...... extraction, which selectively extract relevant features one-by-one, have in some cases achieved real-time performance on single processing element architectures. In this paperwe propose a novel technique which combines the above two approaches. Features are selectively extracted in parallelizable sets...... parallelizable feature sets real-time in a goal oriented fashion, even when some features are pairwise highly correlated and causally complexly interacting....

  20. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  1. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim

    2016-02-01

    Glaucoma is a disease of the retina which is one of the most common causes of permanent blindness worldwide. This paper presents an automatic image processing based method for glaucoma diagnosis from the digital fundus image. In this paper wavelet feature extraction has been followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike the existing research works where the features are considered from the complete fundus or a sub image of the fundus, this work is based on feature extraction from the segmented and blood vessel removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant in comparison to features of the whole or sub fundus image in the detection of glaucoma from fundus image. Accuracy of glaucoma identification achieved in this work is 94.7% and a comparison with existing methods of glaucoma detection from fundus image indicates that the proposed approach has improved accuracy of classification. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    Science.gov (United States)

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
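
    Step (iv), phase correlation matching, can be sketched with plain NumPy FFTs as below: the peak of the inverse transform of the normalized cross-power spectrum gives the shift between the two feature images. This is the textbook formulation, not the paper's exact implementation.

```python
import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the translation between two feature images by phase
    correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum gives the phase-shift vector."""
    F1, F2 = np.fft.fft2(left), np.fft.fft2(right)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```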

  3. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  4. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features.

    Science.gov (United States)

    Xu, Yan; Jia, Zhipeng; Wang, Liang-Bo; Ai, Yuqing; Zhang, Fang; Lai, Maode; Chang, Eric I-Chao

    2017-05-26

    Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating the workload of pathologists. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of singular images (usually up to gigapixels). The property of extremely large size for a single image also makes a histopathology image dataset be considered large-scale, even if the number of images in the dataset is limited. In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained by a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the framework proposed has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. The framework proposed is a simple, efficient and effective system for histopathology image automatic analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.
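
    The transfer step (using a network pretrained on ImageNet as a fixed feature extractor for patches) can be sketched with a recent torchvision (0.13 or later) as below. The paper's own networks differ; ResNet-18 and the preprocessing constants here are stand-ins.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained ResNet-18 used as a fixed feature extractor; the
# paper used different CNNs, so this is only a stand-in.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()        # drop the classifier head
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def patch_features(pil_patch):
    """Return a 512-D deep activation feature for one histopathology patch."""
    return backbone(preprocess(pil_patch).unsqueeze(0)).squeeze(0).numpy()
```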

  5. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    Science.gov (United States)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered
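
    For reference, the per-cell statistic can be computed roughly as below, using the standard Getis-Ord Gi* form with binary weights over a square moving window; the window size and edge handling are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import ndimage

def local_gi_star(curvature, window=5):
    """Getis-Ord Gi* statistic for every cell of a curvature raster,
    using binary weights over a square moving window (an assumption)."""
    x = curvature.astype(float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    w_sum = float(window * window)                     # sum of binary weights
    local_sum = ndimage.uniform_filter(x, size=window, mode="nearest") * w_sum
    numerator = local_sum - x_bar * w_sum
    denominator = s * np.sqrt((n * w_sum - w_sum ** 2) / (n - 1))
    return numerator / denominator
```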

  6. Improvement of a multi-organ extraction algorithm in an abdominal CAD system based on features in neighbouring regions

    International Nuclear Information System (INIS)

    Shimizu, A.; Sakurai, H.; Kobatake, H.; Nawano, S.; Smutek, D.

    2007-01-01

    This paper proposes a new MAP-based segmentation that takes into account not only features measured at the voxel of interest but also features within neighbouring regions. It sequentially maximizes three kinds of posterior probabilities to extract 12 organs in abdominal CT images. This paper shows the results of applying the algorithm to non-contrast 3D CT images of 15 patients. (orig.)

  7. Two-dimensional reduction PCA: a novel approach for feature extraction, representation, and recognition

    Science.gov (United States)

    Mutelo, R. M.; Khor, L. C.; Woo, W. L.; Dlay, S. S.

    2006-01-01

    We develop a novel image feature extraction and recognition method, two-dimensional reduction principal component analysis (2D-RPCA). A two-dimensional image matrix contains redundant information between columns and between rows. Conventional PCA removes redundancy by transforming the 2D image matrices into a vector, where dimension reduction is done in one direction (column-wise). Unlike 2DPCA, 2D-RPCA eliminates redundancies between image rows and compresses the data in rows, and then eliminates redundancies between image columns and compresses the data in columns. Therefore, 2D-RPCA has two image compression stages: first, it eliminates the redundancies between image rows and compresses the information optimally within a few rows; then it eliminates the redundancies between image columns and compresses the information within a few columns. This sequence is selected in such a way that the recognition accuracy is optimized. As a result it has a better representation, as the information is more compact in a smaller area. The classification time is reduced significantly (smaller feature matrix). Furthermore, the computational complexity of the proposed algorithm is reduced. The result is that 2D-RPCA classifies images faster, requires less memory storage, and yields higher recognition accuracy. The ORL database is used as a benchmark. The new algorithm achieves a recognition rate of 95.0% using a 9×5 feature matrix, compared to a recognition rate of 93.0% with a 112×7 feature matrix for the 2DPCA method and 90.5% for PCA (Eigenfaces) using 175 principal components.
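
    The two-directional compression can be sketched as follows: build the column-wise and row-wise scatter matrices from a stack of training images, keep the leading eigenvectors of each, and project every image from both sides. The 9x5 feature size from the abstract is used as the default; everything else is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def two_directional_2dpca(images, k_rows=9, k_cols=5):
    """Learn row- and column-direction projections from a stack of image
    matrices (2DPCA in both directions) and return a function that maps
    an image to a small k_rows x k_cols feature matrix."""
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)
    centered = A - mean
    # w x w scatter across columns (compresses the width of each image)
    G_col = np.einsum("nij,nik->jk", centered, centered) / len(A)
    # h x h scatter across rows (compresses the height of each image)
    G_row = np.einsum("nji,nki->jk", centered, centered) / len(A)
    _, X = np.linalg.eigh(G_col)      # eigenvalues ascending, so reverse
    _, Z = np.linalg.eigh(G_row)
    X = X[:, ::-1][:, :k_cols]
    Z = Z[:, ::-1][:, :k_rows]
    return lambda img: Z.T @ (img - mean) @ X
```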

  8. Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images

    Science.gov (United States)

    Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan

    2017-08-01

    Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney in ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle and ensure the image preserves the pixels in a region of interest (ROI) for further extraction. A Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate unwanted intensity values. Statistical texture feature methods are used, namely the Intensity Histogram (IH), the Gray-Level Co-Occurrence Matrix (GLCM) and the Gray-Level Run-Length Matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using One-Way ANOVA in SPSS, the results indicated that three features (Contrast, Difference Variance and Inverse Difference Moment Normalized) from the GLCM are not statistically significant; this suggests that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.

  9. Introduction: feature issue on In Vivo Microcirculation Imaging

    OpenAIRE

    Dunn, Andrew K.; Leitgeb, Rainer; Wang, Ruikang K.; Zhang, Hao F.

    2011-01-01

    The editors introduce the Biomedical Optics Express feature issue, “In Vivo Microcirculation Imaging,” which includes 14 contributions from the biomedical optics community, covering such imaging techniques as optical coherence tomography, photoacoustic microscopy, laser Doppler/speckle imaging, and near infrared spectroscopy and fluorescence imaging.

  10. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  11. Survival analysis for high-dimensional, heterogeneous medical data: Exploring feature extraction as an alternative to feature selection.

    Science.gov (United States)

    Pölsterl, Sebastian; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-09-01

    In clinical research, the primary interest is often the time until occurrence of an adverse event, i.e., survival analysis. Its application to electronic health records is challenging for two main reasons: (1) patient records are comprised of high-dimensional feature vectors, and (2) feature vectors are a mix of categorical and real-valued features, which implies varying statistical properties among features. To learn from high-dimensional data, researchers can choose from a wide range of methods in the fields of feature selection and feature extraction. Whereas feature selection is well studied, little work focused on utilizing feature extraction techniques for survival analysis. We investigate how well feature extraction methods can deal with features having varying statistical properties. In particular, we consider multiview spectral embedding algorithms, which specifically have been developed for these situations. We propose to use random survival forests to accurately determine local neighborhood relations from right censored survival data. We evaluated 10 combinations of feature extraction methods and 6 survival models with and without intrinsic feature selection in the context of survival analysis on 3 clinical datasets. Our results demonstrate that for small sample sizes - less than 500 patients - models with built-in feature selection (Cox model with ℓ1 penalty, random survival forest, and gradient boosted models) outperform feature extraction methods by a median margin of 6.3% in concordance index (inter-quartile range: [-1.2%;14.6%]). If the number of samples is insufficient, feature extraction methods are unable to reliably identify the underlying manifold, which makes them of limited use in these situations. For large sample sizes - in our experiments, 2500 samples or more - feature extraction methods perform as well as feature selection methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    Science.gov (United States)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality which captures a photo of the eye pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification, instead of fingerprints. Iris recognition was chosen for identification in this research because every human has a distinct iris and the iris is protected by the cornea, so it keeps a fixed shape. This iris recognition consists of three steps: pre-processing of data, feature extraction, and feature matching. The Hough transformation is used in pre-processing to locate the iris area, and Daugman's rubber sheet model is used to normalize the iris data into rectangular blocks. To characterize the iris, the box-counting method was used to obtain the fractal dimension value of the iris. Tests were carried out with the k-fold cross-validation method with k = 5. In each test, 10 different values of K were used for the K-Nearest Neighbor (KNN) classifier. The best iris recognition accuracy obtained was 92.63% for K = 3 in the K-Nearest Neighbor (KNN) method.
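    Box counting itself is simple to state: cover the binary pattern with boxes of decreasing size and fit the slope of log(count) against log(1/size). A minimal sketch follows; the box sizes and the thresholding step are illustrative assumptions, not the paper's exact settings.

```python
# Minimal box-counting fractal dimension for a binary (e.g., iris texture) image.
import numpy as np

def box_counting_dimension(binary_img: np.ndarray) -> float:
    """Estimate the fractal dimension by counting occupied boxes at several scales."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h = (binary_img.shape[0] // s) * s
        w = (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing foreground
        counts.append(max(occupied, 1))
    # Slope of log(count) versus log(1/size) gives the dimension estimate.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Example with random noise (expected dimension close to 2):
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((128, 128)) > 0.5))
```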

  13. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since they have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results demonstrated the effectiveness of the proposed method.

  14. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid over-dependence on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral datasets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
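    The spatial-spectral feature described above (patch of the first d PCs, flattened and sorted) is straightforward to sketch. The patch size and d below are illustrative choices, and the dictionary learning and sparse coding stages are not reproduced.

```python
# Sketch of the spatial-spectral feature: project spectra onto the first d PCs,
# take a local patch around each pixel, flatten it, and sort the vector so it is
# invariant to local rotation of the patch.
import numpy as np
from sklearn.decomposition import PCA

def spatial_spectral_features(cube: np.ndarray, d: int = 4, patch: int = 5) -> np.ndarray:
    """cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    pcs = PCA(n_components=d).fit_transform(cube.reshape(-1, bands))
    pcs = pcs.reshape(rows, cols, d)
    r = patch // 2
    padded = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")
    feats = np.empty((rows, cols, patch * patch * d))
    for i in range(rows):
        for j in range(cols):
            vec = padded[i:i + patch, j:j + patch, :].ravel()
            feats[i, j] = np.sort(vec)   # sorting removes dependence on pixel order
    return feats.reshape(rows * cols, -1)
```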

  15. Featured Image: Revealing Hidden Objects with Color

    Science.gov (United States)

    Kohler, Susanna

    2018-02-01

    Stunning color astronomical images can often be the motivation for astronomers to continue slogging through countless data files, calculations, and simulations as we seek to understand the mysteries of the universe. But sometimes the stunning images can, themselves, be the source of scientific discovery. This is the case with the below image of Lynds Dark Nebula 673, located in the Aquila constellation, which was captured with the Mayall 4-meter telescope at Kitt Peak National Observatory by a team of scientists led by Travis Rector (University of Alaska Anchorage). After creating the image with a novel color-composite imaging method that reveals faint Hα emission (visible in red in both images here), Rector and collaborators identified the presence of a dozen new Herbig-Haro objects, small cloud patches that are caused when material is energetically flung out from newly born stars. The image adapted above shows three of the new objects, HH 118789, aligned with two previously known objects, HH 32 and 332, suggesting they are driven by the same source. For more beautiful images and insight into the authors' discoveries, check out the article linked below! Full view of Lynds Dark Nebula 673. Click for the larger view this beautiful composite image deserves! [T.A. Rector (University of Alaska Anchorage) and H. Schweiker (WIYN and NOAO/AURA/NSF)] Citation: T. A. Rector et al 2018 ApJ 852 13. doi:10.3847/1538-4357/aa9ce1

  16. Object learning improves feature extraction but does not improve feature selection.

    Directory of Open Access Journals (Sweden)

    Linus Holm

    Full Text Available A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: (1) select more informative image locations upon which to fixate their eyes, or (2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: The number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice.

  17. Second order Statistical Texture Features from a New CSLBPGLCM for Ultrasound Kidney Images Retrieval

    Directory of Open Access Journals (Sweden)

    Chelladurai CALLINS CHRISTIYANA

    2013-12-01

    Full Text Available This work proposes a new method called Center Symmetric Local Binary Pattern Grey Level Co-occurrence Matrix (CSLBPGLCM) for extracting second-order statistical texture features from ultrasound kidney images. These features are then fed into an ultrasound kidney image retrieval system for medical applications. This new GLCM matrix combines the benefits of CSLBP and the conventional GLCM. The main intention of CSLBPGLCM is to reduce the number of grey levels in an image, not by simply accumulating grey levels but by incorporating another statistical texture feature. The proposed approach is carefully evaluated in an ultrasound kidney image retrieval system and compared with the conventional GLCM. Experiments show that the proposed method increases the retrieval efficiency and accuracy and reduces the time complexity of the ultrasound kidney image retrieval system by means of second-order statistical texture features.
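    In the spirit of the combination described above, the sketch below first computes a center-symmetric LBP code per pixel (4 bits from the 4 center-symmetric neighbour pairs) and then builds a co-occurrence matrix over the code map. The threshold T and the horizontal (0, 1) offset are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch: CS-LBP code map followed by a co-occurrence matrix over the codes.
import numpy as np

def cs_lbp(img: np.ndarray, T: float = 0.01) -> np.ndarray:
    """4-bit center-symmetric LBP: compare the 4 center-symmetric neighbour pairs."""
    img = img.astype(float)
    pairs = [
        (img[:-2, :-2],  img[2:, 2:]),    # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]),  # N  vs S
        (img[:-2, 2:],   img[2:, :-2]),   # NE vs SW
        (img[1:-1, 2:],  img[1:-1, :-2]), # E  vs W
    ]
    code = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code += ((a - b) > T).astype(int) << bit
    return code  # values in [0, 15]

def cooccurrence(code: np.ndarray, levels: int = 16) -> np.ndarray:
    """Co-occurrence matrix of CS-LBP codes for the horizontal (0, 1) offset."""
    a, b = code[:, :-1].ravel(), code[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    return glcm / glcm.sum()
```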

  18. Topology reduction in deep convolutional feature extraction networks

    Science.gov (United States)

    Wiatowski, Thomas; Grohs, Philipp; Bölcskei, Helmut

    2017-08-01

    Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature map energy decay results of Wiatowski et al., 2017, are generalized to O(a^(-N)), where an arbitrary decay factor a > 1 can be realized through a suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 - ɛ) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.

  19. Joint analysis of histopathology image features and gene expression in breast cancer.

    Science.gov (United States)

    Popovici, Vlad; Budinská, Eva; Čápková, Lenka; Schwarz, Daniel; Dušek, Ladislav; Feit, Josef; Jaggi, Rolf

    2016-05-11

    Genomics and proteomics are nowadays the dominant techniques for novel biomarker discovery. However, histopathology images contain a wealth of information related to the tumor histology, morphology and tumor-host interactions that is not accessible through these techniques. Thus, integrating the histopathology images in the biomarker discovery workflow could potentially lead to the identification of new image-based biomarkers and the refinement or even replacement of the existing genomic and proteomic signatures. However, extracting meaningful and robust image features to be mined jointly with genomic (and clinical, etc.) data represents a real challenge due to the complexity of the images. We developed a framework for integrating the histopathology images in the biomarker discovery workflow based on the bag-of-features approach - a method that has the advantage of being assumption-free and data-driven. The images were reduced to a set of salient patterns and additional measurements of their spatial distribution, with the resulting features being directly used in a standard biomarker discovery application. We demonstrated this framework in a search for prognostic biomarkers in breast cancer which resulted in the identification of several prognostic image features and a promising multimodal (imaging and genomic) prognostic signature. The source code for the image analysis procedures is freely available. The framework proposed allows for a joint analysis of images and gene expression data. Its application to a set of breast cancer cases resulted in image-based and combined (image and genomic) prognostic scores for relapse-free survival.

  20. Research on Techniques of Multifeatures Extraction for Tongue Image and Its Application in Retrieval

    Directory of Open Access Journals (Sweden)

    Liyan Chen

    2017-01-01

    Full Text Available Tongue diagnosis is one of the important methods in traditional Chinese medicine. Doctors can judge a disease's status by observing the patient's tongue color and texture. This paper presents a novel approach to extract color and texture features of tongue images. First, we use an improved GLA (Generalized Lloyd Algorithm) to extract the main color of the tongue image. Considering that the color feature cannot fully express tongue image information, the paper analyzes the texture features of the tongue edge and proposes an algorithm to extract them. Then, we integrate the two features in retrieval with different weights. Experimental results show that the proposed method can improve the detection rate of lesions in tongue images relative to single-feature retrieval.

  1. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Full Text Available The electric power system is one of the most complex artificial systems in the world. Its complexity is determined by its characteristics regarding constitution, configuration, operation, organization, etc. Faults in the electric power system cannot be completely avoided. When the electric power system goes from a normal state to a failure or abnormal state, its electric quantities (current, voltage, angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in the principal component usually corresponds to the fault. Therefore, utilizing real-time measurements from phasor measurement units and based on principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in the electric power system, there still exist numerous problems that need close and intensive study.
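    The indicator described above, that the variable with the largest coefficient in the first principal component points to the fault, can be sketched on synthetic phasor-measurement data. The data and the injected fault below are illustrative assumptions.

```python
# Minimal sketch of the PCA-based fault indicator: apply PCA to a window of PMU
# measurements and inspect which variable carries the largest loading in the
# first principal component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_vars = 200, 10
X = rng.normal(0.0, 0.01, size=(n_samples, n_vars))   # normal operation
X[150:, 3] += 1.0                                      # synthetic fault on variable 3

pca = PCA(n_components=2).fit(X)
loadings = np.abs(pca.components_[0])                  # first-PC coefficients
print("suspected fault variable:", int(np.argmax(loadings)))
```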

  2. Bottle-Neck Feature Extraction Structures for Multilingual Training and Porting (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    Published in the Proceedings of the Workshop on Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia. Stacked Bottle-Neck (SBN) feature extraction is a crucial part of modern automatic speech recognition (ASR) systems. The SBN network traditionally contains a hidden bottle-neck layer. Keywords: DNN topology; Stacked Bottle-Neck; feature extraction; multilingual training; system porting.

  3. Hand veins feature extraction using DT-CNNS

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2007-05-01

    As the identification process is based on the unique patterns of the users, biometric technologies are expected to provide highly secure authentication systems. The existing systems using fingerprints or retina patterns are, however, very vulnerable. One's fingerprints are accessible as soon as the person touches a surface, while a high resolution camera easily captures the retina pattern. Thus, both patterns can easily be "stolen" and forged. Besides, technical considerations decrease the usability of these methods. Due to the direct contact with the finger, the sensor gets dirty, which decreases the authentication success ratio. Aligning the eye with a camera to capture the retina pattern gives an uncomfortable feeling. On the other hand, vein patterns of either the palm of the hand or a single finger offer stable, unique and repeatable biometric features. A fingerprint-based identification system using Cellular Neural Networks has already been proposed by Gao. His system covers all stages of a typical fingerprint verification procedure from Image Preprocessing to Feature Matching. This paper performs a critical review of the individual algorithmic steps. Notably, the operation of False Feature Elimination is applied only once instead of 3 times. Furthermore, the number of iterations is limited to 1 for all used templates. Hence, the computational need of the feedback contribution is removed. Consequently, the computational effort is drastically reduced without a notable change in quality. This allows a full integration of the detection mechanism. The system is prototyped on a Xilinx Virtex II Pro P30 FPGA.

  4. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt the support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images; the results show that the proposed algorithm performs well and can conveniently provide visual tags for objects.
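    A PCA + SVM face recognition pipeline of the kind described above can be sketched with scikit-learn, whose Olivetti faces dataset is the ORL database. The component count and SVM settings below are illustrative choices, not the paper's parameters.

```python
# Sketch of a PCA + SVM pipeline on the ORL/Olivetti face database.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()          # 400 images, 40 subjects (ORL database)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

model = make_pipeline(PCA(n_components=60, whiten=True), SVC(kernel="rbf", C=10))
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
# The predicted identity can then serve as the face's visual tag.
```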

  5. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    Science.gov (United States)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
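    The chain described above (2-D Fourier spectrum, selection of spectrum points, KLT reduction) can be sketched with NumPy. The GA-based point selection is replaced here by a fixed low-frequency block, which is an illustrative simplification, and PCA stands in for the Karhunen-Loeve Transform on centred data.

```python
# Sketch of DFT-based palmprint features: magnitude spectrum, low-frequency
# block, log scaling, then PCA/KLT reduction of the resulting vectors.
import numpy as np
from sklearn.decomposition import PCA

def dft_features(img: np.ndarray, block: int = 16) -> np.ndarray:
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low = spectrum[cy - block:cy + block, cx - block:cx + block]
    return np.log1p(low).ravel()          # log scaling tames the dynamic range

# Given a stack of palmprint images `imgs` (n, H, W):
# X = np.vstack([dft_features(im) for im in imgs])
# X_reduced = PCA(n_components=50).fit_transform(X)   # KLT-style reduction
```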

  6. Polarimetric SAR Image Classification Using Multiple-feature Fusion and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Sun Xun

    2016-12-01

    Full Text Available In this paper, we propose a supervised classification algorithm for Polarimetric Synthetic Aperture Radar (PolSAR) images using multiple-feature fusion and ensemble learning. First, we extract different polarimetric features, including the extended polarimetric feature space, Hoekman, Huynen, H/alpha/A, and four-component scattering features of PolSAR images. Next, we randomly select two types of features each time from all feature sets to guarantee the reliability and diversity of later ensembles, and use a support vector machine as the base classifier for predicting classification results. Finally, we concatenate all prediction probabilities of the base classifiers as the final feature representation and employ the random forest method to obtain the final classification results. Experimental results at the pixel and region levels show the effectiveness of the proposed algorithm.
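    The fusion scheme above (SVMs on random pairs of feature groups, probabilities concatenated and fed to a random forest) can be sketched generically. The feature group names, sizes, and the number of base classifiers are placeholders, not the polarimetric sets used in the paper.

```python
# Sketch of probability-level fusion: SVM base classifiers on random pairs of
# feature groups, concatenated class probabilities, random forest on top.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def train_fusion(feature_groups, y, n_base=10, seed=0):
    """feature_groups: dict name -> (n_samples, n_features) array; y: labels."""
    rng = np.random.default_rng(seed)
    names = list(feature_groups)
    base, probas = [], []
    for _ in range(n_base):
        a, b = rng.choice(names, size=2, replace=False)
        X = np.hstack([feature_groups[a], feature_groups[b]])
        clf = SVC(probability=True).fit(X, y)
        base.append((a, b, clf))
        probas.append(clf.predict_proba(X))
    meta = RandomForestClassifier(n_estimators=200, random_state=seed)
    meta.fit(np.hstack(probas), y)   # concatenated probabilities as new features
    return base, meta
```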

  7. Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions

    Directory of Open Access Journals (Sweden)

    Gert Van Dijck

    2011-07-01

    Full Text Available For two decades, wavelet packet decompositions have been shown to be effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase in classification accuracy with an increase in the frequency resolution level of the energy features.
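    The wavelet packet energy features discussed above can be sketched with PyWavelets (an assumed dependency); the wavelet ('db4') and the decomposition depth are illustrative choices.

```python
# Sketch of wavelet packet energy features for a 1-D signal: decompose to a
# given level and take the normalised energy of each frequency band.
import numpy as np
import pywt

def wp_energy_features(signal: np.ndarray, level: int = 3, wavelet: str = "db4"):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")           # highest-resolution level
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()                    # normalised energy per band

rng = np.random.default_rng(0)
print(wp_energy_features(rng.normal(size=1024)))
```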

  8. Caroli's disease: magnetic resonance imaging features

    International Nuclear Information System (INIS)

    Guy, France; Cognet, Francois; Dranssart, Marie; Cercueil, Jean-Pierre; Conciatori, Laurent; Krause, Denis

    2002-01-01

    Our objective was to describe the main aspects of MR imaging in Caroli's disease. Magnetic resonance cholangiography with a dynamic contrast-enhanced study was performed in nine patients with Caroli's disease. Bile duct abnormalities, lithiasis, dot signs, hepatic enhancement, renal abnormalities, and evidence of portal hypertension were evaluated. Three MR imaging patterns of Caroli's disease were found. In all but two patients, MR imaging findings were sufficient to confirm the diagnosis. Moreover, MR imaging provided information about the severity, location, and extent of liver involvement. This information was useful in planning the best therapeutic strategy. Magnetic resonance cholangiography with a dynamic contrast-enhanced study is a good screening tool for Caroli's disease. Direct cholangiography should be reserved for confirming doubtful cases. (orig.)

  9. Multimodality imaging features of hereditary multiple exostoses

    OpenAIRE

    Kok, H K; Fitzgerald, L; Campbell, N; Lyburn, I D; Munk, P L; Buckley, O; Torreggiani, W C

    2013-01-01

    Hereditary multiple exostoses (HME) or diaphyseal aclasis is an inherited disorder characterised by the formation of multiple osteochondromas, which are cartilage-capped osseous outgrowths, and the development of associated osseous deformities. Individuals with HME may be asymptomatic or develop clinical symptoms, which prompt imaging studies. Different modalities ranging from plain radiographs to cross-sectional and nuclear medicine imaging studies can be helpful in the diagnosis and detecti...

  10. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    Science.gov (United States)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthese and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.

  11. Image Retrieval based on Integration between Color and Geometric Moment Features

    International Nuclear Information System (INIS)

    Saad, M.H.; Saleh, H.I.; Konbor, H.; Ashour, M.

    2012-01-01

    Content-based image retrieval is the retrieval of images based on visual features such as colour, texture and shape. Current approaches to CBIR differ in terms of which image features are extracted; recent work deals with combinations of distances or scores from different and usually independent representations in an attempt to induce high-level semantics from the low-level descriptors of the images. Content-based image retrieval has many application areas such as education, commerce, the military, searching, biomedicine, and Web image classification. This paper proposes a new image retrieval system, which uses color and geometric moment features to form the feature vectors. Bhattacharyya distance and histogram intersection are used to perform feature matching. This framework integrates the color histogram, which represents the global feature, and geometric moments as a local descriptor to enhance the retrieval results. The proposed technique is suitable for precisely retrieving images even in deformation cases such as geometric deformations and noise. It is tested on a standard dataset, and the results show that combining our approach as a local image descriptor with other global descriptors outperforms other approaches.
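    The two feature types and the two matching measures named above can be sketched in plain NumPy. The bin counts, moment orders, and the use of raw (rather than central or Hu) moments are illustrative assumptions.

```python
# Sketch: global colour histogram, simple geometric moments, and the two
# matching measures named in the record (histogram intersection, Bhattacharyya).
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """img: (H, W, 3) uint8. Joint RGB histogram, normalised to sum to 1."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins, bins, bins),
                          range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def geometric_moments(gray: np.ndarray, max_order: int = 2) -> np.ndarray:
    """Raw moments m_pq = sum x^p y^q I(x, y), normalised by m_00."""
    y, x = np.mgrid[:gray.shape[0], :gray.shape[1]]
    m = np.array([np.sum((x ** p) * (y ** q) * gray.astype(float))
                  for p in range(max_order + 1) for q in range(max_order + 1)])
    return m / (m[0] + 1e-12)

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

def bhattacharyya_distance(h1, h2):
    return -np.log(np.sum(np.sqrt(h1 * h2)) + 1e-12)
```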

  12. Detecting Image Splicing Using Merged Features in Chroma Space

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2014-01-01

    Full Text Available Image splicing is an image editing method that copies a part of an image and pastes it onto another image, and it is commonly followed by postprocessing such as local/global blurring, compression, and resizing. To detect this kind of forgery, the image rich models, a feature set successfully used in steganalysis, are first evaluated on the splicing image dataset, and the dominant submodel is selected as the first kind of feature. The selected feature and the DCT Markov features are then used together to detect splicing forgery in the chroma channel, which has proven effective for splicing detection. The experimental results indicate that the proposed method can detect splicing forgeries with a lower error rate than previous methods in the literature.

  13. Pomegranate peel and peel extracts: chemistry and food features.

    Science.gov (United States)

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  15. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hypersaline lake (before September 2010) in the world. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using multi-temporal Landsat 5-TM, 7-ETM+ and 8-OLI images. In doing so, the applicability of different satellite-derived indexes including the Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI) was investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to the other indexes and hence it was used to model the spatiotemporal changes of the lake. In addition, a new approach based on Principal Components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia surface area in the period 2000–2013, especially between 2010 and 2013 when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting the changes between two and three different times, simultaneously.
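    The NDWI-based water extraction used above follows a simple formula, NDWI = (Green - NIR) / (Green + NIR), with water typically mapped by a threshold. The sketch below assumes pre-loaded band arrays and an illustrative threshold of 0; actual Landsat band assignments differ by sensor.

```python
# Sketch of NDWI computation and thresholding for surface water mapping.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDWI = (Green - NIR) / (Green + NIR), values in [-1, 1]."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-12)

def water_mask(green, nir, threshold=0.0):
    return ndwi(green, nir) > threshold    # True where the pixel is likely water

# Change detection between two dates can compare the resulting masks, or apply
# PCA to a stack of multi-temporal NDWI images, in the spirit of NDWI-PCs.
```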

  16. Representing images using curvilinear feature driven subdivision surfaces.

    Science.gov (United States)

    Zhou, Hailing; Zheng, Jianmin; Wei, Lei

    2014-08-01

    This paper presents a subdivision-based vector graphics for image representation and creation. The graphics representation is a subdivision surface defined by a triangular mesh augmented with color attribute at vertices and feature attribute at edges. Special cubic B-splines are proposed to describe curvilinear features of an image. New subdivision rules are then designed accordingly, which are applied to the mesh and the color attribute to define the spatial distribution and piecewise-smoothly varying colors of the image. A sharpness factor is introduced to control the color transition across the curvilinear edges. In addition, an automatic algorithm is developed to convert a raster image into such a vector graphics representation. The algorithm first detects the curvilinear features of the image, then constructs a triangulation based on the curvilinear edges and feature attributes, and finally iteratively optimizes the vertex color attributes and updates the triangulation. Compared with existing vector-based image representations, the proposed representation and algorithm have the following advantages in addition to the common merits (such as editability and scalability): 1) they allow flexible mesh topology and handle images or objects with complicated boundaries or features effectively; 2) they are able to faithfully reconstruct curvilinear features, especially in modeling subtle shading effects around feature curves; and 3) they offer a simple way for the user to create images in a freehand style. The effectiveness of the proposed method has been demonstrated in experiments.

  17. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  18. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  19. A novel feature extraction approach for microarray data based on multi-algorithm fusion.

    Science.gov (United States)

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods to reduce dimensionality in data mining, with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, without considering the inter-relationship between features in general, while set-based feature extraction evaluates features based on their role in a feature set by taking into account the dependency between features. Just like learning methods, feature extraction has a problem with its generalization ability, that is, robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including the Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions.

  20. Adapting Local Features for Face Detection in Thermal Image.

    Science.gov (United States)

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the description power of local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and discussed the results.

  1. AN EVALUATION OF FEATURE LEARNING METHODS FOR HIGH RESOLUTION IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Tokarczyk

    2012-07-01

    Full Text Available Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA and deep belief networks (DBN. We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.

  2. Gross feature recognition of Anatomical Images based on Atlas grid (GAIA): Incorporating the local discrepancy between an atlas and a target image to capture the features of anatomic brain MRI.

    Science.gov (United States)

    Qin, Yuan-Yuan; Hsu, Johnny T; Yoshida, Shoko; Faria, Andreia V; Oishi, Kumiko; Unschuld, Paul G; Redgrave, Graham W; Ying, Sarah H; Ross, Christopher A; van Zijl, Peter C M; Hillis, Argye E; Albert, Marilyn S; Lyketsos, Constantine G; Miller, Michael I; Mori, Susumu; Oishi, Kenichi

    2013-01-01

    We aimed to develop a new method to convert T1-weighted brain MRIs to feature vectors, which could be used for content-based image retrieval (CBIR). To overcome the wide range of anatomical variability in clinical cases and the inconsistency of imaging protocols, we introduced the Gross feature recognition of Anatomical Images based on Atlas grid (GAIA), in which the local intensity alteration, caused by pathological (e.g., ischemia) or physiological (development and aging) intensity changes, as well as by atlas-image misregistration, is used to capture the anatomical features of target images. As a proof-of-concept, the GAIA was applied for pattern recognition of the neuroanatomical features of multiple stages of Alzheimer's disease, Huntington's disease, spinocerebellar ataxia type 6, and four subtypes of primary progressive aphasia. For each of these diseases, feature vectors based on a training dataset were applied to a test dataset to evaluate the accuracy of pattern recognition. The feature vectors extracted from the training dataset agreed well with the known pathological hallmarks of the selected neurodegenerative diseases. Overall, discriminant scores of the test images accurately categorized these test images to the correct disease categories. Images without typical disease-related anatomical features were misclassified. The proposed method is a promising method for image feature extraction based on disease-related anatomical features, which should enable users to submit a patient image and search past clinical cases with similar anatomical phenotypes.

  3. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning play a critical role in visual perception tasks. We focus on improving the robustness of the kernel descriptors (KDES) by embedding context cues and further learning a compact and discriminative feature codebook for feature reduction using Rényi entropy based mutual information. ... as well as the information about the underlying labels of the CKD using CSQMI. Thus the resulting codebook and reduced CKD are discriminative. We verify the effectiveness of our method on several public image benchmark datasets such as YaleB, Caltech-101 and CIFAR-10, as well as a challenging chicken feet dataset of our own. Experimental results show that our method has promising potential for visual object recognition and detection applications.

  4. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-class and 3-class classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  5. An automatic glioma grading method based on multi-feature extraction and fusion.

    Science.gov (United States)

    Zhan, Tianming; Feng, Piaopiao; Hong, Xunning; Lu, Zhenyu; Xiao, Liang; Zhang, Yudong

    2017-07-20

    An accurate assessment of tumor malignancy grade in the preoperative situation is important for clinical management. However, the manual grading of gliomas from MRIs is both a tiresome and time-consuming task for radiologists. Thus, it is a priority to design an automatic and effective computer-aided diagnosis (CAD) tool to assist radiologists in grading gliomas. The aim of this work is to design an automatic computer-aided diagnosis system for grading gliomas using multi-sequence magnetic resonance imaging. The proposed method consists of two steps: (1) the features of high- and low-grade gliomas are extracted from multi-sequence magnetic resonance images, and (2) a KNN classifier is trained to grade the gliomas. In the feature extraction step, the intensity, volume, and local binary patterns (LBP) of the gliomas are extracted, and PCA is used to reduce the data dimension. The proposed "Intensity-Volume-LBP-PCA-KNN" method is validated on the MICCAI 2015 BraTS challenge dataset, and an average grading accuracy of 87.59% is obtained. The proposed method is an effective method for automatically grading gliomas and can be applied to real situations.
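    The "Intensity-Volume-LBP-PCA-KNN" chain named above can be sketched with common scikit-image and scikit-learn building blocks on a 2-D tumour region. The LBP settings, PCA dimension, and K below are illustrative assumptions, and area is used as a stand-in for volume on a single slice.

```python
# Sketch: intensity + area + LBP-histogram features, then PCA and KNN.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def tumour_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """image: 2-D MR slice; mask: binary tumour segmentation of the same shape."""
    intensity = [image[mask].mean(), image[mask].std()]
    area = [mask.sum()]                                      # volume proxy on one slice
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)
    return np.concatenate([intensity, area, hist])

# Given stacked feature vectors X and grades y:
# model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=5))
# model.fit(X, y)
```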

  6. Deep SOMs for automated feature extraction and classification from big data streaming

    Science.gov (United States)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from big data streams, benefiting from the Spark framework for real-time stream and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from the less to the more abstract). The proposed model consists of three hidden self-organizing layers, an input and an output layer. Each layer is made up of a multitude of SOMs, with each map focusing only on a local sub-region of the input image. Each layer then processes the local information to generate more global information in the higher layer. The proposed Deep-SOMs model is unique in terms of the layer architecture, the SOM sampling method, and the learning. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large datasets such as the Leukemia and SRBCT datasets. Comparison results show that the Deep-SOMs model performs better than many existing algorithms for image classification.

  7. Diffuse pancreatic ductal adenocarcinoma: Characteristic imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Jun [Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Byun, Jae Ho [Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of)], E-mail: jhbyun@amc.seoul.kr; Kim, Ji-Youn [Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Myung-Hwan [Department of Internal Medicine, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Jang, Se Jin [Department of Pathology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Ha, Hyun Kwon; Lee, Moon-Gyu [Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of)

    2008-08-15

    Purpose: To evaluate imaging findings of diffuse pancreatic ductal adenocarcinoma. Materials and methods: We included 14 patients (4 men and 10 women; mean age, 64.5 years) with diffuse pancreatic ductal adenocarcinoma on the basis of retrospective radiological review. Two radiologists retrospectively reviewed 14 CT scans in consensus with respect to the following: tumor site, peripheral capsule-like structure, dilatation of intratumoral pancreatic duct, parenchymal atrophy, and ancillary findings. Eight magnetic resonance (MR) examinations with MR cholangiopancreatography (MRCP) and seven endoscopic retrograde cholangiopancreatography (ERCP) were also reviewed, focusing on peripheral capsule-like structure and dilatation of intratumoral pancreatic duct. Results: CT revealed tumor localization to the body and tail in 11 (79%) patients and peripheral capsule-like structure in 13 (93%). The intratumoral pancreatic duct was not visible in 13 (93%). Pancreatic parenchymal atrophy was not present in all 14 patients. Tumor invasion of vessels was observed in all 14 patients and of neighbor organs in 8 (57%). On contrast-enhanced T1-weighted MR images, peripheral capsule-like structure showed higher signal intensity in five patients (71%). In all 11 patients with MRCP and/or ERCP, the intratumoral pancreatic duct was not dilated. Conclusion: Diffuse pancreatic ductal adenocarcinoma has characteristic imaging findings, including peripheral capsule-like structure, local invasiveness, and absence of both dilatation of intratumoral pancreatic duct and parenchymal atrophy.

  8. Featured Image: A Filament Forms and Erupts

    Science.gov (United States)

    Kohler, Susanna

    2017-06-01

    This dynamic image of active region NOAA 12241 was captured by the Solar Dynamics Observatory's Atmospheric Imaging Assembly in December 2014. Observations of this region from a number of observatories and instruments, recently presented by Jincheng Wang (University of Chinese Academy of Sciences) and collaborators, reveal details about the formation and eruption of a long solar filament. Wang and collaborators show that the right part of the filament formed by magnetic reconnection between two bundles of magnetic field lines, while the left part formed as a result of shearing motion. When these two parts interacted, the filament erupted. You can read more about the team's results in the article linked below. Also, check out this awesome video of the filament formation and eruption, again by SDO/AIA: http://cdn.iopscience.com/images/0004-637X/839/2/128/Full/apjaa6bf3f1_video.mp4 Citation: Jincheng Wang et al 2017 ApJ 839 128. doi:10.3847/1538-4357/aa6bf3

  9. Image mosaicking using SURF features of line segments.

    Science.gov (United States)

    Yang, Zhanlong; Shen, Dinggang; Yap, Pew-Thian

    2017-01-01

    In this paper, we present a novel image mosaicking method that is based on Speeded-Up Robust Features (SURF) of line segments, aiming to achieve robustness to incident scaling, rotation, change in illumination, and significant affine distortion between images in a panoramic series. Our method involves 1) using a SURF detection operator to locate feature points; 2) rough matching using SURF features of directed line segments constructed via the feature points; and 3) eliminating incorrectly matched pairs using RANSAC (RANdom SAmple Consensus). Experimental results confirm that our method results in high-quality panoramic mosaics that are superior to state-of-the-art methods.
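    The match-then-RANSAC structure described above can be sketched with OpenCV. ORB is used below as a freely available stand-in for SURF, and plain point features replace the paper's line-segment descriptors; only the overall pipeline (detect, match, reject outliers with RANSAC, warp) is illustrated.

```python
# Sketch of a feature-based mosaicking pipeline with ORB, brute-force matching,
# RANSAC homography estimation, and a naive overlay of the two images.
import cv2
import numpy as np

def mosaic(img1, img2):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # reject bad pairs
    h, w = img2.shape[:2]
    warped = cv2.warpPerspective(img1, H, (w * 2, h))            # img1 into img2 frame
    warped[:h, :w] = img2                                        # naive overlay
    return warped
```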

  10. Telescopic Vector Composition and Polar Accumulated Motion Residuals for Feature Extraction in Arabic Sign Language Recognition

    Directory of Open Access Journals (Sweden)

    Assaleh K

    2007-01-01

    Full Text Available This work introduces two novel approaches for feature extraction applied to video-based Arabic sign language recognition, namely, motion representation through motion estimation and motion representation through motion residuals. In the former, motion estimation is used to compute the motion vectors of a video-based deaf sign or gesture. In the preprocessing stage for feature extraction, the horizontal and vertical components of such vectors are rearranged into intensity images and transformed into the frequency domain. In the second approach, motion is represented through motion residuals. The residuals are then thresholded and transformed into the frequency domain. Since in both approaches the temporal dimension of the video-based gesture needs to be preserved, hidden Markov models are used for classification tasks. Additionally, this paper proposes to project the motion information in the time domain through either telescopic motion vector composition or polar accumulated differences of motion residuals. The feature vectors are then extracted from the projected motion information. After that, model parameters can be evaluated by using simple classifiers such as Fisher's linear discriminant. The paper reports on the classification accuracy of the proposed solutions. Comparisons with existing work reveal that up to 39% of the misclassifications have been corrected.

  11. Telescopic Vector Composition and Polar Accumulated Motion Residuals for Feature Extraction in Arabic Sign Language Recognition

    Directory of Open Access Journals (Sweden)

    T. Shanableh

    2007-10-01

    Full Text Available This work introduces two novel approaches for feature extraction applied to video-based Arabic sign language recognition, namely, motion representation through motion estimation and motion representation through motion residuals. In the former, motion estimation is used to compute the motion vectors of a video-based deaf sign or gesture. In the preprocessing stage for feature extraction, the horizontal and vertical components of such vectors are rearranged into intensity images and transformed into the frequency domain. In the second approach, motion is represented through motion residuals. The residuals are then thresholded and transformed into the frequency domain. Since in both approaches the temporal dimension of the video-based gesture needs to be preserved, hidden Markov models are used for classification tasks. Additionally, this paper proposes to project the motion information in the time domain through either telescopic motion vector composition or polar accumulated differences of motion residuals. The feature vectors are then extracted from the projected motion information. After that, model parameters can be evaluated by using simple classifiers such as Fisher's linear discriminant. The paper reports on the classification accuracy of the proposed solutions. Comparisons with existing work reveal that up to 39% of the misclassifications have been corrected.

  12. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Entin Martiana Kusumaningtyas

    2018-01-01

    Full Text Available According to the WHO, heart disease is the leading cause of death, and examining it with current hospital methods is not cheap. Iridology is one of the most popular alternative ways to assess the condition of organs. It enables a health practitioner or non-expert to study signs in the iris that can reveal abnormalities in the body, including basic genetics, toxin deposition, circulatory congestion, and other weaknesses. Research on computer iridology has been done before; one example is a computer iridology system for detecting heart conditions. Such a system involves several stages: eye image capture, pre-processing, cropping, segmentation, feature extraction, and classification using thresholding algorithms. In this study, the feature extraction process is performed with a binarization method that transforms the image into black and white. We compare two binarization approaches: binarization based on grayscale intensity and binarization based on proximity. The proposed system was tested at the Mugi Barokah Clinic, Surabaya. We conclude that the grayscale approach yields better classification than the proximity-based one.
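
    A minimal sketch of the grayscale binarization step on a cropped iris region, using OpenCV. Otsu's method is shown as one concrete thresholding choice; the paper's proximity-based variant is not reproduced, and the file name is hypothetical.

    ```python
    # Binarize a grayscale iris crop: Otsu-selected threshold vs. a fixed threshold.
    import cv2

    gray = cv2.imread("iris_crop.png", cv2.IMREAD_GRAYSCALE)

    # Global binarization with the threshold chosen automatically by Otsu's method.
    _, bw_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # A fixed-threshold alternative for comparison.
    _, bw_fixed = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

    cv2.imwrite("iris_binary.png", bw_otsu)
    ```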

  13. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm that uses the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics such as the mean value, standard deviation, maximum value and minimum value of pixel location, brightness and color elements, with the statistics updated as regions grow. We also introduced a new concept of conspicuity degree and applied the method to 21 diverse images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.

  14. Entropy based unsupervised Feature Selection in digital mammogram image using rough set theory.

    Science.gov (United States)

    Velayutham, C; Thangavel, K

    2012-01-01

    Feature Selection (FS) is a process which attempts to select features that are more informative. In supervised FS methods, various feature subsets are evaluated using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. However, for many data mining applications, decision class labels are often unknown or incomplete, which indicates the significance of unsupervised FS. In unsupervised learning, decision class labels are not provided. The problem is that not all features are important: some of the features may be redundant, and others may be irrelevant and noisy. In this paper, a novel unsupervised FS method for mammogram images, using rough set-based entropy measures, is proposed. A typical mammogram image processing system generally consists of mammogram image acquisition, pre-processing, segmentation, and extraction of features from the segmented mammogram image. The proposed method is used to select features from the data set; it is compared with existing rough set-based supervised FS methods, and the classification performance of both is recorded, demonstrating the efficiency of the method.
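
    A simplified sketch of the general idea of unsupervised, entropy-based feature ranking: each feature column is discretized and scored by its histogram entropy. This is only an illustration; it does not implement the rough-set entropy measures of the paper, and the feature matrix is a random placeholder.

    ```python
    # Rank feature columns by histogram entropy without using class labels.
    import numpy as np

    def column_entropy(x, bins=16):
        counts, _ = np.histogram(x, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(1)
    X = rng.random((200, 12))                      # 200 samples, 12 candidate features

    scores = np.array([column_entropy(X[:, j]) for j in range(X.shape[1])])
    ranking = np.argsort(scores)                   # order features by entropy; which end
    selected = ranking[:5]                         # is "informative" depends on the criterion
    print("selected feature indices:", selected)
    ```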

  15. Featured Image: Waves in a Coronal Fan

    Science.gov (United States)

    Kohler, Susanna

    2017-09-01

    The inset in this Solar Dynamics Observatory image shows a close-up view of a stunning coronal fan extending above the Sun's atmosphere. These sweeping loops were observed on 7 March 2012 by a number of observatories, revealing the first known evidence of standing slow magnetoacoustic waves in cool coronal fan loops. The oscillations of the loops, studied in a recent article led by Vaibhav Pant (Indian Institute of Astrophysics), were triggered by blast waves that were generated by X-class flares from the distant active region AR 11429 (marked with the yellow box at left). The overplotted X-ray curve in the top right corner of the image (click for the full view) shows the evolution of the flares that perturbed the footpoints of the loops. You can check out the video of the action below, and follow the link to the original article to read more about what these oscillations tell us about the Sun's activity. Citation: V. Pant et al 2017 ApJL 847 L5. doi:10.3847/2041-8213/aa880f

  16. Novel feature extraction method based on weight difference of weighted network for epileptic seizure detection.

    Science.gov (United States)

    Fenglin Wang; Qingfang Meng; Hong-Bo Xie; Yuehui Chen

    2014-01-01

    The extraction of classification features is the primary and core problem in all epileptic EEG detection algorithms, since it strongly affects the performance of the detection algorithm. In this paper, a novel epileptic EEG feature extraction method based on the statistical parameters of a weighted complex network is proposed. The EEG signal is first transformed into a weighted network and the weight differences of all the nodes in the network are analyzed. Then the sum of the top-quintile weight differences is extracted as the classification feature. Finally, the extracted feature is applied to classify the epileptic EEG dataset. Experimental results show that single-feature classification based on the extracted feature achieves a classification accuracy of up to 94.75%, which indicates that the extracted feature can distinguish ictal EEG from interictal EEG and has great potential for real-time epileptic seizure detection.

  17. Texture-based feature extraction using the wavelet transform on x rays

    Science.gov (United States)

    Scholl, Ingrid; Pelikan, Erich; Repges, Rudolf; Tolxdorff, Thomas

    1996-04-01

    Focal bone lesions and bone tumors are of special interest in radiology because of their rare appearance (only one percent of all tumor diseases). This motivates computer-assisted diagnosis for recognizing bone tumors. Our image analysis extracts the radiomorphologic features in x rays using a texture-based approach. When reading an x ray, the radiologist examines regions of different sizes to gain both local and global impressions of the morphologic structure. In order to analyze the x ray at different resolutions, a multiresolution approach based on the wavelet transform is applied to the radiographs. To measure the informational content of the wavelet coefficients for the individual morphologic structures, we calculated a normalized summation of the absolute wavelet coefficients within a local N by N window and called this feature the local energy. In various tests we validated this feature and the parameters used for calculating the wavelet transform with respect to correct classification of the medical structures, applying a Kohonen topological map. It is shown that the wavelet transform is well suited for the feature extraction of textures.
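
    A sketch of the "local energy" feature under stated assumptions: the mean of absolute wavelet detail coefficients over a local N x N window, computed with PyWavelets and a uniform (moving-average) filter. A random array stands in for a radiograph.

    ```python
    # Local energy of wavelet detail subbands within an N x N neighbourhood.
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    image = np.random.default_rng(2).random((256, 256))   # placeholder x-ray

    N = 9                                                  # local window size
    coeffs = pywt.wavedec2(image, wavelet="db2", level=2)

    local_energy_maps = []
    for level_detail in coeffs[1:]:                        # skip the approximation
        for band in level_detail:                          # LH, HL, HH subbands
            # mean of |coefficients| over an N x N neighbourhood (normalized sum)
            local_energy_maps.append(uniform_filter(np.abs(band), size=N))

    print(len(local_energy_maps), local_energy_maps[0].shape)
    ```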

  18. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically extract such changes.

  19. Image processing tool for automatic feature recognition and quantification

    Science.gov (United States)

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
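
    A hedged sketch of the pipeline described above, edge detection followed by a Hough transform, using OpenCV as a stand-in for the patent's specific implementation; the input file name is hypothetical.

    ```python
    # Edge detection followed by a probabilistic Hough transform for line features.
    import cv2
    import numpy as np

    img = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)                      # edge detector

    # Hough transform on the edge map to identify linear features.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            length = np.hypot(x2 - x1, y2 - y1)          # simple per-feature quantity
            print(f"line from ({x1}, {y1}) to ({x2}, {y2}), length {length:.1f} px")
    ```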

  20. Prostate cancer characterization on MR images using fractal features.

    Science.gov (United States)

    Lopes, R; Ayache, A; Makni, N; Puech, P; Villers, A; Mordon, S; Betrouni, N

    2011-01-01

    The aim of this work is the computerized detection of prostate cancer on T2-weighted MR images. The authors combined fractal and multifractal features to perform textural analysis of the images. The fractal dimension was computed using the variance method; the multifractal spectrum was estimated by an adaptation of a multifractional Brownian motion model. Voxels were labeled as tumor/nontumor via nonlinear supervised classification. Two classification algorithms were tested: support vector machine (SVM) and AdaBoost. Experiments were performed on images from 17 patients. Ground truth was available from histological images. Detection and classification results (sensitivity, specificity) were (83%, 91%) and (85%, 93%) for SVM and AdaBoost, respectively. Classification using the authors' model combining fractal and multifractal features was more accurate than classification using classical texture features (such as Haralick, wavelet, and Gabor filters). Moreover, the method was more robust against signal intensity variations. Although the method was only applied to T2 images, it could be extended to multispectral MR.
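
    A hedged sketch of a variance-based (variogram) fractal dimension estimate for an image patch, under the usual fractional-Brownian-surface assumption Var[I(x+d) - I(x)] ~ d^(2H) with FD = 3 - H. This is only an illustration of the idea, not the authors' exact implementation; the patch is a placeholder array.

    ```python
    # Variance-method fractal dimension of a 2-D intensity patch.
    import numpy as np

    def fractal_dimension_variance(patch, lags=(1, 2, 4, 8)):
        log_lag, log_var = [], []
        for d in lags:
            diff_x = patch[:, d:] - patch[:, :-d]          # horizontal increments
            diff_y = patch[d:, :] - patch[:-d, :]          # vertical increments
            v = np.concatenate([diff_x.ravel(), diff_y.ravel()]).var()
            log_lag.append(np.log(d))
            log_var.append(np.log(v))
        slope = np.polyfit(log_lag, log_var, 1)[0]         # slope = 2H
        hurst = slope / 2.0
        return 3.0 - hurst                                 # fractal dimension of the surface

    patch = np.random.default_rng(3).random((64, 64))
    print(fractal_dimension_variance(patch))
    ```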

  1. Automated characterization of diabetic foot using nonlinear features extracted from thermograms

    Science.gov (United States)

    Adam, Muhammad; Ng, Eddie Y. K.; Oh, Shu Lih; Heng, Marabelle L.; Hagiwara, Yuki; Tan, Jen Hong; Tong, Jasper W. K.; Acharya, U. Rajendra

    2018-03-01

    Diabetic foot is a major complication of diabetes mellitus (DM). Blood circulation to the foot decreases due to DM and hence the temperature of the plantar foot is reduced. Thermography is a non-invasive imaging method employed to view thermal patterns using an infrared (IR) camera. It allows qualitative and visual documentation of temperature fluctuations in vascular tissues, but it is difficult to diagnose these temperature changes manually. Thus, a computer-assisted diagnosis (CAD) system may help to accurately detect diabetic foot and prevent traumatic outcomes such as ulceration and lower extremity amputation. In this study, plantar foot thermograms of 33 healthy persons and 33 individuals with type 2 diabetes are taken. These foot images are decomposed using discrete wavelet transform (DWT) and higher order spectra (HOS) techniques. Various texture and entropy features are extracted from the decomposed images. These combined (DWT + HOS) features are ranked using t-values and classified using a support vector machine (SVM) classifier. Our proposed methodology achieved a maximum accuracy of 89.39%, sensitivity of 81.81% and specificity of 96.97% using only five features. The performance of the proposed thermography-based CAD system can help clinicians to obtain a second opinion on their diagnosis of diabetic foot.
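
    A toy sketch of part of this pipeline, assuming random arrays in place of real thermograms: DWT decomposition, simple subband entropy features, and an SVM with cross-validation. The HOS features and t-value ranking are omitted.

    ```python
    # DWT subband entropy features fed to an SVM classifier (toy data).
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def dwt_entropy_features(img, level=2):
        feats = []
        coeffs = pywt.wavedec2(img, "db4", level=level)
        for detail in coeffs[1:]:
            for band in detail:
                p = np.abs(band).ravel()
                p = p / (p.sum() + 1e-12)
                feats.append(-np.sum(p * np.log2(p + 1e-12)))   # Shannon entropy
        return feats

    rng = np.random.default_rng(4)
    images = rng.random((66, 128, 128))            # stand-ins for 33 + 33 thermograms
    labels = np.array([0] * 33 + [1] * 33)

    X = np.array([dwt_entropy_features(im) for im in images])
    print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
    ```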

  2. Introduction: Feature Issue on Optical Imaging and Spectroscopy

    OpenAIRE

    Hielscher, Andreas H.; Mycek, Mary-Ann; Perelman, Lev T.

    2010-01-01

    The editors introduce the Biomedical Optics Express feature issue, “Optical Imaging and Spectroscopy,” which was a technical area at the 2010 Optical Society of America (OSA), Biomedical Optics (BIOMED) Topical Meeting held on 11–14 April in Miami, Florida. The feature issue includes 23 contributions from conference attendees.

  3. Endmember extraction algorithms from hyperspectral images

    Directory of Open Access Journals (Sweden)

    M. C. Cantero

    2006-06-01

    Full Text Available In recent years, several high-resolution sensors have been developed for hyperspectral remote sensing applications. Some of these sensors are already available on space-borne devices. Space-borne sensors are currently acquiring a continual stream of hyperspectral data, and new efficient unsupervised algorithms are required to analyze the great amount of data produced by these instruments. The identification of image endmembers is a crucial task in hyperspectral data exploitation. Once the individual endmembers have been identified, several methods can be used to map their spatial distribution, associations and abundances. This paper reviews the Pixel Purity Index (PPI), N-FINDR and Automatic Morphological Endmember Extraction (AMEE) algorithms developed to accomplish the task of finding appropriate image endmembers, by applying them to real hyperspectral data. In order to compare the performance of these methods, a metric based on the Root Mean Square Error (RMSE) between the estimated and reference abundance maps is used.
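
    A simplified sketch of the Pixel Purity Index idea: project every pixel spectrum onto many random unit vectors ("skewers") and count how often each pixel is an extreme of a projection; high-count pixels are endmember candidates. A random cube stands in for real hyperspectral data, and this is not the reviewed algorithms' exact implementation.

    ```python
    # Simplified PPI scoring by random-skewer projections.
    import numpy as np

    rng = np.random.default_rng(5)
    cube = rng.random((50, 50, 100))              # rows x cols x spectral bands
    pixels = cube.reshape(-1, cube.shape[-1])     # one spectrum per row

    n_skewers = 1000
    counts = np.zeros(pixels.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=pixels.shape[-1])
        skewer /= np.linalg.norm(skewer)
        proj = pixels @ skewer
        counts[proj.argmax()] += 1                # extreme pixels score a hit
        counts[proj.argmin()] += 1

    candidates = np.argsort(counts)[::-1][:10]    # top-scoring pixel indices
    print(candidates)
    ```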

  4. Featured Image: Making Dust in the Lab

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    This remarkable photograph (which spans only about 10 μm across; click for a full view) reveals what happens when you form dust grains in a laboratory under conditions similar to those of interstellar space. The cosmic life cycle of dust grains is not well understood: we know that in the interstellar medium (ISM), dust is destroyed at a higher rate than it is produced by stellar sources. Since the amount of dust in the ISM stays constant, however, there must be additional sources of dust production besides stars. A team of scientists led by Daniele Fulvio (Pontifical Catholic University of Rio de Janeiro and the Max Planck Institute for Astronomy at the Friedrich Schiller University Jena) has now studied formation mechanisms of dust grains in the lab by mimicking low-temperature ISM conditions and exploring how, under these conditions, carbonaceous materials condense from the gas phase to form dust grains. To read more about their results and see additional images, check out the paper below. Citation: Daniele Fulvio et al 2017 ApJS 233 14. doi:10.3847/1538-4365/aa9224

  5. Prostate cancer multi-feature analysis using trans-rectal ultrasound images.

    Science.gov (United States)

    Mohamed, S S; Salama, M M A; Kamel, M; El-Saadany, E F; Rizkalla, K; Chin, J

    2005-08-07

    This note focuses on extracting and analysing prostate texture features from trans-rectal ultrasound (TRUS) images for tissue characterization. One of the principal contributions of this investigation is the use of the information of the images' frequency domain features and spatial domain features to attain a more accurate diagnosis. Each image is divided into regions of interest (ROIs) by the Gabor multi-resolution analysis, a crucial stage, in which segmentation is achieved according to the frequency response of the image pixels. The pixels with a similar response to the same filter are grouped to form one ROI. Next, from each ROI two different statistical feature sets are constructed; the first set includes four grey level dependence matrix (GLDM) features and the second set consists of five grey level difference vector (GLDV) features. These constructed feature sets are then ranked by the mutual information feature selection (MIFS) algorithm. Here, the features that provide the maximum mutual information of each feature and class (cancerous and non-cancerous) and the minimum mutual information of the selected features are chosen, yielding a reduced feature subset. The two constructed feature sets, GLDM and GLDV, as well as the reduced feature subset, are examined in terms of three different classifiers: the condensed k-nearest neighbour (CNN), the decision tree (DT) and the support vector machine (SVM). The accuracy classification results range from 87.5% to 93.75%, where the performance of the SVM and that of the DT are significantly better than the performance of the CNN.
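
    A hedged sketch of grey-level co-occurrence texture features for one ROI, using scikit-image as an analogue of the GLDM feature set (function names follow scikit-image 0.19 and later). The GLDV set and the Gabor-based ROI segmentation from the note are not reproduced, and the ROI is a random placeholder.

    ```python
    # Grey-level co-occurrence matrix features for a single region of interest.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    roi = (np.random.default_rng(6).random((64, 64)) * 255).astype(np.uint8)

    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)

    # A few standard co-occurrence statistics, averaged over distances and angles.
    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)
    ```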

  6. Prostate cancer multi-feature analysis using trans-rectal ultrasound images

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, S S [Electrical and Computer Engineering Department, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1 (Canada); Salama, M M A [Electrical and Computer Engineering Department, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1 (Canada); Kamel, M [Electrical and Computer Engineering Department, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1 (Canada); El-Saadany, E F [Electrical and Computer Engineering Department, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1 (Canada); Rizkalla, K [University of Western Ontario, 1151 Richmond Street, Suite 2, London, Ontario N6A 5B8 (Canada); Chin, J [University of Western Ontario, 1151 Richmond Street, Suite 2, London, Ontario N6A 5B8 (Canada)

    2005-08-07

    This note focuses on extracting and analysing prostate texture features from trans-rectal ultrasound (TRUS) images for tissue characterization. One of the principal contributions of this investigation is the use of the information of the images' frequency domain features and spatial domain features to attain a more accurate diagnosis. Each image is divided into regions of interest (ROIs) by the Gabor multi-resolution analysis, a crucial stage, in which segmentation is achieved according to the frequency response of the image pixels. The pixels with a similar response to the same filter are grouped to form one ROI. Next, from each ROI two different statistical feature sets are constructed; the first set includes four grey level dependence matrix (GLDM) features and the second set consists of five grey level difference vector (GLDV) features. These constructed feature sets are then ranked by the mutual information feature selection (MIFS) algorithm. Here, the features that provide the maximum mutual information of each feature and class (cancerous and non-cancerous) and the minimum mutual information of the selected features are chosen, yielding a reduced feature subset. The two constructed feature sets, GLDM and GLDV, as well as the reduced feature subset, are examined in terms of three different classifiers: the condensed k-nearest neighbour (CNN), the decision tree (DT) and the support vector machine (SVM). The accuracy classification results range from 87.5% to 93.75%, where the performance of the SVM and that of the DT are significantly better than the performance of the CNN. (note)

  7. NOTE: Prostate cancer multi-feature analysis using trans-rectal ultrasound images

    Science.gov (United States)

    Mohamed, S. S.; Salama, M. M. A.; Kamel, M.; El-Saadany, E. F.; Rizkalla, K.; Chin, J.

    2005-08-01

    This note focuses on extracting and analysing prostate texture features from trans-rectal ultrasound (TRUS) images for tissue characterization. One of the principal contributions of this investigation is the use of the information of the images' frequency domain features and spatial domain features to attain a more accurate diagnosis. Each image is divided into regions of interest (ROIs) by the Gabor multi-resolution analysis, a crucial stage, in which segmentation is achieved according to the frequency response of the image pixels. The pixels with a similar response to the same filter are grouped to form one ROI. Next, from each ROI two different statistical feature sets are constructed; the first set includes four grey level dependence matrix (GLDM) features and the second set consists of five grey level difference vector (GLDV) features. These constructed feature sets are then ranked by the mutual information feature selection (MIFS) algorithm. Here, the features that provide the maximum mutual information of each feature and class (cancerous and non-cancerous) and the minimum mutual information of the selected features are chosen, yielding a reduced feature subset. The two constructed feature sets, GLDM and GLDV, as well as the reduced feature subset, are examined in terms of three different classifiers: the condensed k-nearest neighbour (CNN), the decision tree (DT) and the support vector machine (SVM). The accuracy classification results range from 87.5% to 93.75%, where the performance of the SVM and that of the DT are significantly better than the performance of the CNN.

  8. Prostate cancer multi-feature analysis using trans-rectal ultrasound images

    International Nuclear Information System (INIS)

    Mohamed, S S; Salama, M M A; Kamel, M; El-Saadany, E F; Rizkalla, K; Chin, J

    2005-01-01

    This note focuses on extracting and analysing prostate texture features from trans-rectal ultrasound (TRUS) images for tissue characterization. One of the principal contributions of this investigation is the use of the information of the images' frequency domain features and spatial domain features to attain a more accurate diagnosis. Each image is divided into regions of interest (ROIs) by the Gabor multi-resolution analysis, a crucial stage, in which segmentation is achieved according to the frequency response of the image pixels. The pixels with a similar response to the same filter are grouped to form one ROI. Next, from each ROI two different statistical feature sets are constructed; the first set includes four grey level dependence matrix (GLDM) features and the second set consists of five grey level difference vector (GLDV) features. These constructed feature sets are then ranked by the mutual information feature selection (MIFS) algorithm. Here, the features that provide the maximum mutual information of each feature and class (cancerous and non-cancerous) and the minimum mutual information of the selected features are chosen, yielding a reduced feature subset. The two constructed feature sets, GLDM and GLDV, as well as the reduced feature subset, are examined in terms of three different classifiers: the condensed k-nearest neighbour (CNN), the decision tree (DT) and the support vector machine (SVM). The accuracy classification results range from 87.5% to 93.75%, where the performance of the SVM and that of the DT are significantly better than the performance of the CNN. (note)

  9. Perinatal clinical and imaging features of CLOVES syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Pineda, Israel [Virgen del Rocio Children' s Hospital, Department of Pediatric Surgery, Seville (Spain); Fajardo, Manuel [Virgen del Rocio Children' s Hospital, Department of Pediatric Radiology, Seville (Spain); Chaudry, Gulraiz; Alomari, Ahmad I. [Children' s Hospital Boston and Harvard Medical School, Division of Vascular and Interventional Radiology, Boston, MA (United States)

    2010-08-15

    We report a neonate with antenatal imaging features suggestive of CLOVES syndrome. Postnatal clinical and imaging findings confirmed the diagnosis, with the constellation of truncal overgrowth, cutaneous capillary malformation, lymphatic and musculoskeletal anomalies. The clinical, radiological and histopathological findings noted in this particular phenotype help differentiate it from other overgrowth syndromes with complex vascular anomalies. (orig.)

  10. Imaging features of brain tuberculoma in Tanzania: case report and ...

    African Journals Online (AJOL)

    She underwent CT and MR imaging where multiple enhancing lesions were revealed in the brain parenchyma. The features of tuberculoma on CT and MR imaging may mimic the appearance of several other brain lesions. Histological diagnosis of tuberculoma was obtained. In areas where tuberculosis is endemic, the ...

  11. Disorders of the pediatric pancreas: imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Nijs, Els [University Hospital Gasthuisberg, Department of Radiology, Leuven (Belgium); Callahan, Michael J.; Taylor, George A. [Boston Children' s Hospital, Department of Radiology, Boston, MA (United States)

    2005-04-01

    The purpose of this manuscript is to provide an overview of the normal development of the pancreas as well as pancreatic pathology in children. Diagnostic imaging plays a major role in the evaluation of the pancreas in infants and children. Familiarity with the range of normal appearance and the diseases that commonly affect this gland is important for the accurate and timely diagnosis of pancreatic disorders in the pediatric population. Normal embryology is discussed, as are the most common congenital anomalies that occur as a result of aberrant development during embryology. These include pancreas divisum, annular pancreas, agenesis of the dorsal pancreatic anlagen and ectopic pancreatic tissue. Syndromes that can manifest pancreatic pathology include: Beckwith Wiedemann syndrome, von Hippel-Lindau disease and autosomal dominant polycystic kidney disease. Children and adults with cystic fibrosis and Shwachman-Diamond syndrome frequently present with pancreatic insufficiency. Trauma is the most common cause of pancreatitis in children. In younger children, unexplained pancreatic injury must always alert the radiologist to potential child abuse. Pancreatic pseudocysts are a complication of trauma, but can also be seen in the setting of acute or chronic pancreatitis from other causes. Primary pancreatic neoplasms are rare in children and are divided into exocrine tumors such as pancreatoblastoma and adenocarcinoma and into endocrine or islet cell tumors. Islet cell tumors are classified as functioning (insulinoma, gastrinoma, VIPoma and glucagonoma) and nonfunctioning tumors. Solid-cystic papillary tumor is probably the most common pancreatic tumor in Asian children. Although quite rare, secondary tumors of the pancreas can be associated with certain primary malignancies. (orig.)

  12. Disorders of the pediatric pancreas: imaging features

    International Nuclear Information System (INIS)

    Nijs, Els; Callahan, Michael J.; Taylor, George A.

    2005-01-01

    The purpose of this manuscript is to provide an overview of the normal development of the pancreas as well as pancreatic pathology in children. Diagnostic imaging plays a major role in the evaluation of the pancreas in infants and children. Familiarity with the range of normal appearance and the diseases that commonly affect this gland is important for the accurate and timely diagnosis of pancreatic disorders in the pediatric population. Normal embryology is discussed, as are the most common congenital anomalies that occur as a result of aberrant development during embryology. These include pancreas divisum, annular pancreas, agenesis of the dorsal pancreatic anlagen and ectopic pancreatic tissue. Syndromes that can manifest pancreatic pathology include: Beckwith Wiedemann syndrome, von Hippel-Lindau disease and autosomal dominant polycystic kidney disease. Children and adults with cystic fibrosis and Shwachman-Diamond syndrome frequently present with pancreatic insufficiency. Trauma is the most common cause of pancreatitis in children. In younger children, unexplained pancreatic injury must always alert the radiologist to potential child abuse. Pancreatic pseudocysts are a complication of trauma, but can also be seen in the setting of acute or chronic pancreatitis from other causes. Primary pancreatic neoplasms are rare in children and are divided into exocrine tumors such as pancreatoblastoma and adenocarcinoma and into endocrine or islet cell tumors. Islet cell tumors are classified as functioning (insulinoma, gastrinoma, VIPoma and glucagonoma) and nonfunctioning tumors. Solid-cystic papillary tumor is probably the most common pancreatic tumor in Asian children. Although quite rare, secondary tumors of the pancreas can be associated with certain primary malignancies. (orig.)

  13. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictive parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM) and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictive parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
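
    A toy sketch of the overall idea under heavy simplification: a handful of texture statistics of a noisy image feed a small neural-network regressor that predicts a bilateral-filter strength, which is then applied with OpenCV. The 83-attribute feature set, t-test ranking, and SFS step are not reproduced, and the training targets here are synthetic.

    ```python
    # Texture statistics -> neural-network regressor -> predicted bilateral filter sigma.
    import numpy as np
    import cv2
    from sklearn.neural_network import MLPRegressor

    def texture_stats(img):
        gx, gy = np.gradient(img.astype(float))
        return [img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()]

    rng = np.random.default_rng(7)
    train_imgs = (rng.random((50, 64, 64)) * 255).astype(np.uint8)
    train_sigma = rng.uniform(10, 80, size=50)             # synthetic "optimal" sigmas

    X = np.array([texture_stats(im) for im in train_imgs])
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, train_sigma)

    test_img = (rng.random((64, 64)) * 255).astype(np.uint8)
    sigma = float(model.predict([texture_stats(test_img)])[0])
    denoised = cv2.bilateralFilter(test_img, d=5, sigmaColor=sigma, sigmaSpace=sigma)
    ```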

  14. Significance of the impact of motion compensation on the variability of PET image features.

    Science.gov (United States)

    Carles, M; Bach, T; Torres-Espallardo, I; Baltas, D; Nestle, U; Martí-Bonmatí, L

    2018-03-21

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in

  15. Significance of the impact of motion compensation on the variability of PET image features

    Science.gov (United States)

    Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.

    2018-03-01

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the

  16. Memory-efficient architecture for hysteresis thresholding and object feature extraction.

    Science.gov (United States)

    Najjar, Mayssaa A; Karlapudi, Swetha; Bayoumi, Magdy A

    2011-12-01

    Hysteresis thresholding is a method that offers enhanced object detection. Due to its recursive nature, it is time consuming and requires a lot of memory resources. This makes it avoided in streaming processors with limited memory. We propose two versions of a memory-efficient and fast architecture for hysteresis thresholding: a high-accuracy pixel-based architecture and a faster block-based one at the expense of some loss in the accuracy. Both designs couple thresholding with connected component analysis and feature extraction in a single pass over the image. Unlike queue-based techniques, the proposed scheme treats candidate pixels almost as foreground until objects complete; a decision is then made to keep or discard these pixels. This allows processing on the fly, thus avoiding additional passes for handling candidate pixels and extracting object features. Moreover, labels are reused so only one row of compact labels is buffered. Both architectures are implemented in MATLAB and VHDL. Simulation results on a set of real and synthetic images show that the execution speed can attain an average increase up to 24× for the pixel-based and 52× for the block-based when compared to state-of-the-art techniques. The memory requirements are also drastically reduced by about 99%. © 2011 IEEE
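
    A hedged sketch of hysteresis thresholding followed by connected-component feature extraction, using scikit-image and SciPy rather than the streaming single-pass hardware design described above; the input is a random placeholder image.

    ```python
    # Hysteresis thresholding, connected components, and simple per-object features.
    import numpy as np
    from skimage.filters import apply_hysteresis_threshold
    from scipy import ndimage

    img = np.random.default_rng(8).random((128, 128))

    # Pixels above `high` are foreground; pixels above `low` are kept only if
    # connected to a high pixel -- the hysteresis rule.
    mask = apply_hysteresis_threshold(img, low=0.7, high=0.9)
    labels, n_objects = ndimage.label(mask)

    index = list(range(1, n_objects + 1))
    areas = ndimage.sum(mask, labels, index=index)              # object areas
    centroids = ndimage.center_of_mass(mask, labels, index=index)
    print(n_objects, areas[:3], centroids[:3])
    ```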

  17. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine ... is obtained by filtering the original data block augmented with time lagged variables such that improved predictive performance of the quality variable results. Key issues regarding data preprocessing and extraction of suitable image features are discussed with a case study, the on-line estimation of nitrogen ...

  18. An age estimation method using brain local features for T1-weighted images.

    Science.gov (United States)

    Kondo, Chihiro; Ito, Koichi; Kai Wu; Sato, Kazunori; Taki, Yasuyuki; Fukuda, Hiroshi; Aoki, Takafumi

    2015-08-01

    Previous statistical analysis studies using large-scale brain magnetic resonance (MR) image databases have shown that brain tissues exhibit age-related morphological changes. This indicates that one can estimate the age of a subject from his/her brain MR image by evaluating morphological changes associated with healthy aging. This paper proposes an age estimation method using local features extracted from T1-weighted MR images. The brain local features are defined by the volumes of brain tissues parcellated into local regions defined by the automated anatomical labeling atlas. The proposed method selects optimal local regions to improve the performance of age estimation. We evaluate the performance of the proposed method using 1,146 T1-weighted images from a Japanese MR image database, and we also discuss the medical implications of the selected optimal local regions.

  19. PyEEG: an open source Python module for EEG/MEG feature extraction.

    Science.gov (United States)

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
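
    One representative EEG feature, relative spectral band power, written directly with NumPy as an illustration of the kind of feature such a toolbox provides; this does not call the PyEEG API itself, and the signal is synthetic.

    ```python
    # Relative band power (delta, theta, alpha, beta) of a single EEG channel.
    import numpy as np

    def relative_band_power(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        total = spectrum[(freqs >= bands[0][0]) & (freqs < bands[-1][1])].sum()
        return [spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
                for lo, hi in bands]

    fs = 256                                           # sampling rate in Hz
    t = np.arange(0, 4, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(9).normal(size=t.size)
    print(relative_band_power(eeg, fs))                # strong alpha (10 Hz) component
    ```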

  20. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    Science.gov (United States)

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
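
    A minimal illustration of one of the descriptor types such a toolkit computes, amino acid composition (AAC), written in plain Python; it does not use the iFeature API, and the example sequence is arbitrary.

    ```python
    # Amino acid composition: the 20-dimensional frequency vector of a protein sequence.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def aac(sequence):
        """Return the fraction of each standard amino acid in the sequence."""
        seq = sequence.upper()
        return {aa: seq.count(aa) / len(seq) for aa in AMINO_ACIDS}

    print(aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
    ```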

  1. Remote Sensing Image Fusion Based on Enhancement of Edge Feature Information

    Directory of Open Access Journals (Sweden)

    Yang Song

    2014-03-01

    Full Text Available A new fusion algorithm for multispectral and panchromatic images is proposed, based on the non-subsampled contourlet transform and the Lab color space. The non-subsampled contourlet transform is used to decompose an image into a low-frequency approximation component and several high-frequency detail components, and an edge enhancement method is employed to extract features from the high-resolution image. To keep spectral distortion small during fusion, the Lab color space, which simulates human visual perception, is adopted in this paper. Experimental results indicate that the proposed algorithm obtains a fused image with richer details.
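
    A simplified sketch of the fusion idea under stated assumptions: the multispectral image is moved to the Lab color space and the detail coefficients of its lightness channel are replaced by those of the panchromatic image. An ordinary single-level DWT stands in for the non-subsampled contourlet transform, and random arrays stand in for co-registered inputs.

    ```python
    # Lab-space detail injection with a DWT as a stand-in for the NSCT.
    import numpy as np
    import pywt
    from skimage.color import rgb2lab, lab2rgb

    rng = np.random.default_rng(10)
    ms = rng.random((128, 128, 3))            # multispectral (RGB) image in [0, 1]
    pan = rng.random((128, 128))              # panchromatic image in [0, 1]

    lab = rgb2lab(ms)
    cA_l, details_l = pywt.dwt2(lab[:, :, 0], "haar")
    cA_p, details_p = pywt.dwt2(pan * 100.0, "haar")   # scale pan roughly to the L range

    fused_L = pywt.idwt2((cA_l, details_p), "haar")    # keep MS approximation,
    lab[:, :, 0] = np.clip(fused_L, 0, 100)            # inject panchromatic details
    fused = lab2rgb(lab)
    print(fused.shape)
    ```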

  2. Extraction and Recognition of Nonlinear Interval-Type Features Using Symbolic KDA Algorithm with Application to Face Recognition

    Directory of Open Access Journals (Sweden)

    P. S. Hiremath

    2008-01-01

    recognition in the framework of symbolic data analysis. Classical KDA extracts features which are single-valued in nature to represent face images. These single-valued variables may not be able to capture the variation of each feature across all the images of the same subject, which leads to loss of information. The symbolic KDA algorithm extracts the most discriminating nonlinear interval-type features, which optimally discriminate among the classes represented in the training set. The proposed method has been successfully tested for face recognition using two databases, the ORL database and the Yale face database. The effectiveness of the proposed method is shown in terms of comparative performance against popular face recognition methods such as the kernel Eigenface and kernel Fisherface methods. Experimental results show that symbolic KDA yields an improved recognition rate.

  3. Online Feature Selection for Classifying Emphysema in HRCT Images

    Directory of Open Access Journals (Sweden)

    M. Prasad

    2008-06-01

    Full Text Available Feature subset selection, applied as a pre-processing step to machine learning, is valuable for dimensionality reduction, eliminating irrelevant data and improving classifier performance. In the classic formulation of the feature selection problem, it is assumed that all the features are available at the beginning. However, in many real-world problems, there are scenarios where not all features are present initially and must be integrated as they become available. In such scenarios, online feature selection provides an efficient way to sort through a large space of features. It is in this context that we introduce online feature selection for the classification of emphysema, a smoking-related disease that appears as low-attenuation regions in High Resolution Computed Tomography (HRCT) images. The technique was successfully evaluated on 61 HRCT scans and compared with different online feature selection approaches, including hill climbing, best-first search, grafting, and correlation-based feature selection. The results were also compared against the 'density mask', a standard approach used for emphysema detection in medical image analysis.

  4. Feature maps driven no-reference image quality prediction of authentically distorted images

    Science.gov (United States)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  5. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.

  6. Global Contrast Enhancement Based Image Forensics Using Statistical Features

    Directory of Open Access Journals (Sweden)

    Neetu Singh

    2017-01-01

    Full Text Available The evolution of modern cameras and mobile phones equipped with sophisticated image editing software has revolutionized digital imaging. In the process of image editing, contrast enhancement is a very common technique for hiding visual traces of tampering. In our work, we employ the statistical distributions of block variance and of AC DCT coefficients of an image to detect global contrast enhancement. The variations in the statistical parameters of the block variance and AC DCT coefficient distributions for different degrees of contrast enhancement are used as features to detect contrast enhancement. An SVM classifier with 10-fold cross-validation is employed. An overall detection accuracy greater than 99%, with a false rate of less than 2%, has been achieved. The proposed method is novel and can be applied to uncompressed, previously JPEG-compressed, and post-enhancement JPEG-compressed images with high accuracy. The proposed method does not rely on the oft-repeated image-histogram-based approach.
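
    A hedged sketch of the two feature sources named above, per-block variance and per-block AC DCT coefficients, with only summary histogram statistics kept as the feature vector. A random array stands in for a real image, and the SVM step is omitted.

    ```python
    # Block variance and AC DCT energy distributions as contrast-enhancement features.
    import numpy as np
    from scipy.fft import dctn

    img = (np.random.default_rng(11).random((256, 256)) * 255).astype(np.float64)
    B = 8                                              # block size

    block_vars, ac_energies = [], []
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            block = img[i:i + B, j:j + B]
            block_vars.append(block.var())
            dct = dctn(block, norm="ortho")
            dct[0, 0] = 0.0                            # drop the DC coefficient
            ac_energies.append(np.abs(dct).sum())      # AC coefficient energy

    # Histogram-shape statistics of both distributions form a compact feature vector.
    features = np.concatenate([np.histogram(block_vars, bins=16, density=True)[0],
                               np.histogram(ac_energies, bins=16, density=True)[0]])
    print(features.shape)
    ```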

  7. Biometric analysis of the palm vein distribution by means two different techniques of feature extraction

    Science.gov (United States)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access control, identification, and authentication purposes, and are more reliable than classical means of identification. Furthermore, these patterns can be used for venipuncture in health care to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our analysis consists of several steps, the first of which is the enhancement of the acquired images, implemented with spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. The overall process is focused on recognizing people through images of their palm-dorsal vein distributions obtained under near-infrared light. This work compares two different feature extraction techniques, moments and veincode. The classification task is carried out using artificial neural networks. Two databases are used to analyze the performance of the algorithms: the first belongs to the Hong Kong Polytechnic University and the second is our own database.
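
    A hedged sketch of the vein-pattern extraction stage with OpenCV: contrast enhancement, adaptive thresholding, and a morphological opening. The file name is hypothetical, CLAHE is used as one possible spatial enhancement filter, and all parameters are illustrative only.

    ```python
    # Enhance a near-infrared hand image, then adaptive threshold + morphology.
    import cv2

    nir = cv2.imread("palm_nir.png", cv2.IMREAD_GRAYSCALE)

    # Contrast enhancement of the NIR image (CLAHE as one possible spatial filter).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(nir)

    # Adaptive thresholding: veins appear darker than the surrounding tissue.
    binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, blockSize=21, C=5)

    # Morphological opening removes isolated noise while keeping the vein network.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    veins = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cv2.imwrite("vein_pattern.png", veins)
    ```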

  8. Multiscale Object Recognition and Feature Extraction Using Wavelet Networks

    National Research Council Canada - National Science Library

    Jaggi, Seema; Karl, W. C; Krim, Hamid; Willsky, Alan S

    1995-01-01

    In this work we present a novel method of object recognition and feature generation based on multiscale object descriptions obtained using wavelet networks in combination with morphological filtering...

  9. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    Science.gov (United States)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has seen explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that correlate poorly with diagnostic information, restricting further interpretation and analysis. To tackle this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological changes of the heart valves. Using the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals using the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
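
    A hedged sketch of the Shannon-envelope stage only: after amplitude normalization, the Shannon energy -x^2 log(x^2) is computed sample-wise and smoothed with a moving average. A synthetic signal stands in for a recorded heart sound, and the DWT and morphological feature steps are omitted.

    ```python
    # Shannon energy envelope of a (synthetic) heart sound signal.
    import numpy as np

    fs = 2000
    t = np.arange(0, 2, 1.0 / fs)
    hs = np.sin(2 * np.pi * 40 * t) * np.exp(-((t % 0.8) * 10) ** 2)  # toy heart sound

    x = hs / np.max(np.abs(hs))                     # amplitude normalization
    shannon_energy = -x**2 * np.log(x**2 + 1e-12)   # Shannon energy per sample

    win = int(0.02 * fs)                            # 20 ms smoothing window
    envelope = np.convolve(shannon_energy, np.ones(win) / win, mode="same")
    print(envelope.max())
    ```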

  10. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    Science.gov (United States)

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  11. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    Directory of Open Access Journals (Sweden)

    Zichun Zhong

    2016-01-01

    Full Text Available By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  12. Simultaneous binary hash and feature learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have plenty of applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task, the main issue being the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The presented framework for data-dependent image hashing is based on two kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.

  13. Associations Between Spondyloarthritis Features and Magnetic Resonance Imaging Findings

    DEFF Research Database (Denmark)

    Arnbak, Bodil; Grethe Jurik, Anne; Hørslev-Petersen, Kim

    2016-01-01

    were 1) to estimate the prevalence of magnetic resonance imaging (MRI) findings and clinical features included in the ASAS criteria for SpA and 2) to explore the associations between MRI findings and clinical features. METHODS: We included patients ages 18-40 years with persistent low back pain who had...... been referred to the Spine Centre of Southern Denmark. We collected information on clinical features (including HLA-B27 and high-sensitivity C-reactive protein) and MRI findings in the spine and sacroiliac (SI) joints. RESULTS: Of 1,020 included patients, 537 (53%) had at least 1 of the clinical...... features included in the ASAS criteria for SpA. Three clinical features were common-inflammatory back pain according to the ASAS criteria, a good response to nonsteroidal antiinflammatory drugs (NSAIDs), and family history of SpA. The prevalence of these features ranged from 15% to 17%. Sacroiliitis on MRI...

  14. Learning image descriptors for matching based on Haar features

    Science.gov (United States)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2014-08-01

    This paper presents a new and fast binary descriptor for image matching learned from Haar features. The training uses AdaBoost; the weak learner is built on the response function of Haar features instead of histogram-type features. The weak classifier is selected from a large pool of weak features. The selected features differ in type, scale and position within the patch, and each has a corresponding threshold value for its weak classifier. In addition, to cope with the fact that in real matching dissimilar pairs are encountered much more often than similar ones, cascaded classifiers are trained so that the training algorithm sees a large number of dissimilar patch pairs. The final trained outputs are binary-valued vectors, namely descriptors, with a corresponding weight and perceptron threshold for the strong classifier at every stage. We present preliminary results which serve as a proof of concept of the work.

  15. Using image quality measures and features to choose good images for classification of ISAR imagery

    CSIR Research Space (South Africa)

    Steyn, JM

    2014-10-01

    Full Text Available of the ISAR images generated by the sensor really provides the most useful information for classification. This paper proposes multiple quality measures (QM) to automatically select ISAR images that carry good classification information. These features...

  16. FEVER : Extracting Feature-oriented Changes from Commits

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2016-01-01

    The study of the evolution of highly configurable systems requires a thorough understanding of the core ingredients of such systems: (1) the underlying variability model; (2) the assets that together implement the configurable features; and (3) the mapping from variable features to actual assets.

  17. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method...

  18. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, -0.14548269) produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast, and the figure of merit, in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can relate the compression outcomes and feature-preservation characteristics to the choice of wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
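
    As a rough illustration of how a tap-4 low-pass kernel such as the one quoted above can be turned into an analysis filter pair and applied for one DWT level, consider the NumPy sketch below. The rescaling of the quoted coefficients to sum to the square root of 2 and the quadrature-mirror construction of the high-pass filter are assumptions about the normalization convention rather than details from the paper, and the neural-network search itself is not reproduced.

```python
import numpy as np

# Low-pass coefficients quoted in the abstract, rescaled here so they sum to
# sqrt(2) (the usual orthogonal-DWT convention; the rescaling is an assumption).
h = np.array([0.32252136, 0.85258927, 1.38458542, -0.14548269])
h = h * (np.sqrt(2) / h.sum())
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])      # quadrature-mirror high-pass filter

def dwt_level(x, lo, hi):
    """One analysis level: filter with the low-/high-pass pair, then downsample by 2."""
    a = np.convolve(x, lo[::-1])[::2]                # approximation coefficients
    d = np.convolve(x, hi[::-1])[::2]                # detail coefficients
    return a, d

x = np.random.default_rng(0).standard_normal(256)   # stand-in for one image row or signal
approx, detail = dwt_level(x, h, g)
```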

  19. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD type point clouds models. The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences as well as sequential groups of data points. A developable surface approximation procedure is employed to refine incident approximation planes of data points into developable strips. Some experimental results are included to demonstrate the performance of the proposed method.

  20. Feature matching method study for uncorrected fish-eye lens image

    Science.gov (United States)

    Zhang, Baofeng; Jia, Yanhui; Röning, Juha; Feng, Weijia

    2015-01-01

    An uncorrected fish-eye lens image is characterized by resolution that decreases away from the image center and by severe non-linear distortion, so the traditional approach, which first corrects the distortion and then matches features in the image, does not perform well in fish-eye lens applications. The Center-Symmetric Local Binary Pattern (CS-LBP) is a descriptor based on grayscale information from the neighborhood that offers strong grayscale and rotation invariance. In this paper, CS-LBP is combined with the Scale Invariant Feature Transform (SIFT) to solve the problem of feature point matching on uncorrected fish-eye images. We first extract the interest points in a pair of fish-eye images with SIFT and then describe the corresponding regions of the interest points with CS-LBP. Finally, the similarity of the regions is evaluated using the chi-square distance to obtain a unique pair of points, so that for a specified interest point the corresponding point in the other image can be found. The experimental results show that the proposed method achieves satisfying matching performance on uncorrected fish-eye lens images. This study should be useful for enhancing applications of fish-eye lenses in 3D reconstruction and panorama restoration.
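
    A minimal sketch of the CS-LBP code computation with eight neighbours at radius 1 is shown below; the threshold value, the dense computation over a patch, and the 16-bin histogram are illustrative assumptions, and the SIFT detection and chi-square matching stages are not reproduced.

```python
import numpy as np

def cs_lbp(patch, threshold=0.01):
    """Center-symmetric LBP, 8 neighbours at radius 1: compare the four opposite
    neighbour pairs around each interior pixel, giving a 4-bit code (0..15)."""
    p = patch.astype(float)
    pairs = [
        (p[:-2, 1:-1], p[2:, 1:-1]),   # N  vs S
        (p[:-2, 2:],   p[2:, :-2]),    # NE vs SW
        (p[1:-1, 2:],  p[1:-1, :-2]),  # E  vs W
        (p[2:, 2:],    p[:-2, :-2]),   # SE vs NW
    ]
    code = np.zeros((p.shape[0] - 2, p.shape[1] - 2), dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code += ((a - b) > threshold).astype(int) << bit
    return code

patch = np.random.default_rng(1).random((16, 16))               # stand-in for a region around an interest point
hist, _ = np.histogram(cs_lbp(patch), bins=16, range=(0, 16))   # 16-bin CS-LBP descriptor
```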

  1. A holistic image segmentation framework for cloud detection and extraction

    Science.gov (United States)

    Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe

    2013-05-01

    Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering-game-theoretic image segmentation approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, which provides supportive results.

  2. Individual Building Extraction from TerraSAR-X Images Based on Ontological Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Rong Gui

    2016-08-01

    Full Text Available Accurate building information plays a crucial role for urban planning, human settlements and environmental management. Synthetic aperture radar (SAR) images, which deliver images with metric resolution, allow for analyzing and extracting detailed information on urban areas. In this paper, we consider the problem of extracting individual buildings from SAR images based on domain ontology. By analyzing a building scattering model with different orientations and structures, the building ontology model is set up to express multiple characteristics of individual buildings. Under this semantic expression framework, an object-based SAR image segmentation method is adopted to provide homogeneous image objects, and three categories of image object features are extracted. Semantic rules are implemented by organizing image object features, and the individual building objects expression based on an ontological semantic description is formed. Finally, the building primitives are used to detect buildings among the available image objects. Experiments on TerraSAR-X images of Foshan city, China, with a spatial resolution of 1.25 m × 1.25 m, have shown that the total extraction rates are above 84%. The results indicate that the ontological semantic method can exactly extract flat-roof and gable-roof buildings larger than 250 pixels with different orientations.

  3. Discriminatively learning for representing local image features with quadruplet model

    Science.gov (United States)

    Zhang, Da-long; Zhao, Lei; Xu, Duan-qing; Lu, Dong-ming

    2017-11-01

    Traditional hand-crafted features for representing local image patches are evolving into data-driven, learning-based image features, but learning a robust and discriminative descriptor that can support various patch-level computer vision tasks is still an open problem. In this work, we propose a novel deep convolutional neural network (CNN) to learn local feature descriptors. We utilize quadruplets with positive and negative training samples, together with a constraint to restrict the intra-class variance, to learn discriminative CNN representations. Compared with previous works, our model reduces the overlap in feature space between corresponding and non-corresponding patch pairs, and mitigates the margin-variation problem caused by the commonly used triplet loss. We demonstrate that our method achieves better embedding results than some recent works, such as PN-Net and TN-TG, on benchmark datasets.
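
    For orientation, the sketch below implements the widely used margin-based quadruplet loss on descriptor vectors; it is a generic stand-in, not necessarily the exact formulation (or the intra-class variance constraint) used by the authors, and the margins, descriptor size, and random data are illustrative.

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Generic margin-based quadruplet loss on L2 distances between descriptors:
    keep the anchor-positive distance below the anchor-negative distance (margin1)
    and below the distance between two unrelated negatives (margin2).
    This is the commonly used formulation, not necessarily the paper's exact one."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - neg1)
    d_nn = np.linalg.norm(neg1 - neg2)
    return max(0.0, d_ap - d_an + margin1) + max(0.0, d_ap - d_nn + margin2)

rng = np.random.default_rng(2)
a, p, n1, n2 = (rng.standard_normal(128) for _ in range(4))   # stand-in descriptor vectors
print(quadruplet_loss(a, p, n1, n2))
```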

  4. Imaging of Groin Pain: Magnetic Resonance and Ultrasound Imaging Features.

    Science.gov (United States)

    Lee, Susan C; Endo, Yoshimi; Potter, Hollis G

    Evaluation of groin pain in athletes may be challenging, as pain is typically poorly localized and the pubic symphyseal region comprises closely approximated tendons and muscles. As such, magnetic resonance imaging (MRI) and ultrasound (US) may help determine the etiology of groin pain. A PubMed search was performed using the following search terms: ultrasound, magnetic resonance imaging, sports hernia, athletic pubalgia, and groin pain. Date restrictions were not placed on the literature search. Clinical review. Level 4. MRI is sensitive in diagnosing pathology in groin pain. Not only can MRI be used to image rectus abdominis/adductor longus aponeurosis and pubic bone pathology, but it can also evaluate other pathology within the hip and pelvis. MRI is especially helpful when groin pain is poorly localized. Real-time capability makes ultrasound useful in evaluating the pubic symphyseal region, as it can be used for evaluation and treatment. MRI and US are valuable in diagnosing pathology in athletes with groin pain, with the added utility of treatment using US-guided intervention. Strength-of-Recommendation Taxonomy: C.

  5. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images

    Science.gov (United States)

    Gong, Maoguo; Yang, Hailun; Zhang, Puzhao

    2017-07-01

    Ternary change detection aims to detect changes and group the changes into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. Firstly, the sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as the pseudo-labels for training a CNN model as the change feature classifier. The reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Having training samples and the corresponding pseudo-labels, the CNN model can be trained by back propagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of the sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
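
    The log-ratio difference image that feeds the sparse autoencoder is a standard construction for co-registered SAR acquisitions. A minimal sketch, assuming two co-registered intensity images and an illustrative simulated change, is shown below; the autoencoder, clustering, and CNN stages are not reproduced.

```python
import numpy as np

def log_ratio(img_t1, img_t2, eps=1e-6):
    """Log-ratio difference image for two co-registered SAR intensity images.
    The multiplicative speckle becomes additive, so changes of either sign
    appear symmetrically around zero."""
    return np.log((img_t2 + eps) / (img_t1 + eps))

rng = np.random.default_rng(3)
t1 = rng.gamma(shape=4.0, scale=1.0, size=(128, 128))   # toy speckled intensities
t2 = t1.copy()
t2[40:60, 40:60] *= 3.0                                 # simulated positive change
dlr = log_ratio(t1, t2)
```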

  6. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Imaging features of maxillary osteoblastoma and its malignant transformation

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Hiroshi [Dept. of Radiology, Nagasaki Univ. School of Dentistry, Nagasaki (Japan); Ariji, Ei-ichiro [Dept. of Radiology, Nagasaki Univ. School of Dentistry, Nagasaki (Japan); Tanaka, Takemasa [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kyushu Univ., Fukuoka (Japan); Kanda, Shigenobu [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kyushu Univ., Fukuoka (Japan); Mori, Shin-ichiro [Dept. of Dental Radiology, Fukuoka Dental Coll., Fukuoka (Japan); Goto, Masaaki [Dept. of Oral and Maxillofacial Surgery, Saga Medical School, Saga (Japan); Mizuno, Akio [First Dept. of Oral Surgery, Nagasaki Univ. School of Dentistry, Nagasaki (Japan); Okabe, Haruo [Dept. of Oral Pathology, Nagasaki Univ. School of Dentistry, Nagasaki (Japan); Nakamura, Takashi [Dept. of Radiology, Nagasaki Univ. School of Dentistry, Nagasaki (Japan)

    1994-10-01

    We report two cases of osteoblastoma, one of them an unusual case in a 32-year-old woman in whom a maxillary tumor was confidently diagnosed as an osteoblastoma at the time of primary excision and subsequently transformed into an osteosarcoma 7 years after the onset of clinical symptoms. The other patient developed osteosarcoma arising in the maxilla, which was diagnosed 3 years after the primary excision and is very suggestive of malignant transformation in osteoblastoma. We present the radiological features, including computed tomographic and magnetic resonance imaging studies, of this unusual event of transformed tumor and compare imaging features of benign and dedifferentiated counterparts of this rare tumor complex. (orig.)

  8. Feature Recognition of Froth Images Based on Energy Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    WU Yanpeng

    2014-09-01

    Full Text Available This paper proposes an algorithm for determining froth image features based on amplitude-spectrum energy statistics, applying the Fast Fourier Transform to analyze the energy distribution of froth of various sizes. The proposed algorithm has been used to perform a froth feature analysis of froth images from an alumina flotation processing site, and the results show that the consistency rate reaches 98.1% and the usability rate 94.2%; with its good robustness and high efficiency, the algorithm is well suited to flotation processing state recognition.
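
    One plausible reading of the amplitude-spectrum energy statistics is energy accumulated over concentric radial frequency bands of the centred 2-D FFT, since coarse froth concentrates energy at low frequencies and fine froth at high frequencies. The sketch below follows that reading; the band layout and the random test image are illustrative assumptions rather than the paper's exact statistics.

```python
import numpy as np

def radial_spectrum_energy(img, n_bands=8):
    """Energy of the centred FFT amplitude spectrum accumulated in concentric
    radial bands (band layout is illustrative)."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    energy = np.array([np.sum(amp[(r >= lo) & (r < hi)] ** 2)
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return energy / energy.sum()

img = np.random.default_rng(4).random((128, 128))   # stand-in for a froth image
print(radial_spectrum_energy(img))
```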

  9. Imaging features of maxillary osteoblastoma and its malignant transformation

    International Nuclear Information System (INIS)

    Ueno, Hiroshi; Ariji, Ei-ichiro; Tanaka, Takemasa; Kanda, Shigenobu; Mori, Shin-ichiro; Goto, Masaaki; Mizuno, Akio; Okabe, Haruo; Nakamura, Takashi

    1994-01-01

    We report two cases of osteoblastoma, one of them an unusual case in a 32-year-old woman in whom a maxillary tumor was confidently diagnosed as an osteoblastoma at the time of primary excision and subsequently transformed into an osteosarcoma 7 years after the onset of clinical symptoms. The other patient developed osteosarcoma arising in the maxilla, which was diagnosed 3 years after the primary excision and is very suggestive of malignant transformation in osteoblastoma. We present the radiological features, including computed tomographic and magnetic resonance imaging studies, of this unusual event of transformed tumor and compare imaging features of benign and dedifferentiated counterparts of this rare tumor complex. (orig.)

  10. Imaging features of mycobacterium in patients with acquired immunodeficiency syndrome

    International Nuclear Information System (INIS)

    Yang Jun; Sun Yue; Wei Liangui; Xu Yunliang; Li Xingwang

    2013-01-01

    Objective: To analyze the imaging features of mycobacterial infection in AIDS patients. Methods: Twenty-three patients with Mycobacterium tuberculosis and 13 patients with non-tuberculous mycobacteria, all proved etiologically, were included in this study. All patients underwent X-ray and CT examinations, and the imaging data were analyzed and compared. Results: The imaging findings of Mycobacterium tuberculosis in AIDS patients included consolidation (n = 11), pleural effusion (n = 11), and mediastinal lymphadenopathy (n = 11). Pulmonary lesions were always diffusely distributed, and extrapulmonary tuberculosis was found in 14 patients. Pulmonary lesions in non-tuberculous mycobacterial infection tend to be circumscribed. Conclusions: Non-tuberculous mycobacterial infection in AIDS patients is more common and usually combined with other infections. Imaging features are atypical. (authors)

  11. Deep features for efficient multi-biometric recognition with face and ear images

    Science.gov (United States)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications especially in the fields of security. Multimodal systems can increase the resistance to spoof attacks, provide more details and flexibility, and lead to better performance and lower error rate. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit the extracted deep features from Convolutional Neural Networks (CNNs) on the face and ear images to introduce more powerful discriminative features and robust representation ability for them. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused by using a traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate up to 100 % by using face and ear. In addition, the results indicate that the fusion based on DCA is superior to traditional fusion.

  12. Temporal abstraction for feature extraction: a comparative case study in prediction from intensive care monitoring data

    NARCIS (Netherlands)

    Verduijn, Marion; Sacchi, Lucia; Peek, Niels; Bellazzi, Riccardo; de Jonge, Evert; de Mol, Bas A. J. M.

    2007-01-01

    OBJECTIVES: To compare two temporal abstraction procedures for the extraction of meta features from monitoring data. Feature extraction prior to predictive modeling is a common strategy in prediction from temporal data. A fundamental dilemma in this strategy, however, is the extent to which the

  13. Modality prediction of biomedical literature images using multimodal feature representation

    Directory of Open Access Journals (Sweden)

    Pelka, Obioma

    2016-08-01

    Full Text Available This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features, such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors, were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately for each feature, both dimension reduction and computational load reduction were achieved. Various multiple feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features versus visual or text features alone was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. The Random Forest classifier achieved a higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than Lowe SIFT.

  14. A New Approach to Urban Road Extraction Using High-Resolution Aerial Image

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2016-07-01

    Full Text Available Road information is fundamental not only in the military field but also in everyday civilian life. Automatic road extraction from remote sensing images can provide references for city planning as well as for transportation database and map updating. However, owing to the spectral similarity between roads and impervious structures, current methods that use spectral characteristics alone are often ineffective. By contrast, the detailed information discernible from high-resolution aerial images enables road extraction with spatial texture features. In this study, a knowledge-based method is established and proposed that incorporates the spatial texture feature into urban road extraction. The spatial texture feature is initially extracted by the local Moran's I, and the derived texture is added to the spectral bands of the image for image segmentation. Subsequently, features such as brightness, standard deviation, rectangularity, aspect ratio, and area are selected to form the hypothesis and verification model based on road knowledge. Finally, roads are extracted by applying the hypothesis and verification model and are post-processed using mathematical morphology. The newly proposed method is evaluated in two experiments. Results show that the completeness, correctness, and quality of the results reach approximately 94%, 90% and 86%, respectively, indicating that the proposed method is effective for urban road extraction.
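
    A minimal sketch of a local Moran's I texture layer, which could then be stacked with the spectral bands before segmentation, is given below; the 3×3 neighbourhood, the uniform weights, and the random test band are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import convolve

def local_morans_i(band):
    """Local Moran's I with a 3x3 neighbourhood: standardize the band, then multiply
    each pixel's z-score by the mean z-score of its neighbours. High positive values
    mark locally homogeneous (clustered) areas such as road surfaces."""
    z = (band - band.mean()) / (band.std() + 1e-12)
    w = np.ones((3, 3))
    w[1, 1] = 0                                      # exclude the centre pixel itself
    neighbour_mean = convolve(z, w / w.sum(), mode="nearest")
    return z * neighbour_mean

band = np.random.default_rng(5).random((100, 100))   # stand-in for one spectral band
texture = local_morans_i(band)
stacked = np.dstack([band, texture])                 # texture appended as an extra layer before segmentation
```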

  15. Automated Feature Extraction from Hyperspectral Imagery, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award winning Feature Analyst®...

  16. Automated registration of freehand B-mode ultrasound and magnetic resonance imaging of the carotid arteries based on geometric features

    DEFF Research Database (Denmark)

    Carvalho, Diego D. B.; Arias Lorza, Andres Mauricio; Niessen, Wiro J.

    2017-01-01

    An automated method for registering B-mode ultrasound (US) and magnetic resonance imaging (MRI) of the carotid arteries is proposed. The registration uses geometric features, namely, lumen centerlines and lumen segmentations, which are extracted fully automatically from the images after manual an...

  17. Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation

    Science.gov (United States)

    Maji, Pradipta; Roy, Shaswati

    2015-01-01

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on maximum relevance-maximum significance criterion, to select relevant and significant textural features for segmentation problem, while the mathematical morphology based skull stripping preprocessing step is proposed to remove the non-cerebral tissues like skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961

  18. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Science.gov (United States)

    Maji, Pradipta; Roy, Shaswati

    2015-01-01

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on maximum relevance-maximum significance criterion, to select relevant and significant textural features for segmentation problem, while the mathematical morphology based skull stripping preprocessing step is proposed to remove the non-cerebral tissues like skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  19. Study on edge-extraction of remote sensing image

    International Nuclear Information System (INIS)

    Wen Jianguang; Xiao Qing; Xu Huiping

    2005-01-01

    Image edge-extraction is an important step in image processing and recognition, and also a hot spot in scientific research. In this paper, based on the primary methods of remote sensing image edge-extraction, the authors, for the first time, propose several elements which should be considered before processing. Then, the qualities of several methods of remote sensing image edge-extraction are systematically summarized. Finally, taking the Near Nasca area (Peru) as an example, the edge-extraction of the Magmatic Range is analysed. (authors)

  20. Singular Value Decomposition Based Features for Automatic Tumor Detection in Wireless Capsule Endoscopy Images

    Directory of Open Access Journals (Sweden)

    Vahid Faghih Dinevari

    2016-01-01

    Full Text Available Wireless capsule endoscopy (WCE) is a new noninvasive instrument which allows direct observation of the gastrointestinal tract to diagnose its relative diseases. Because of the large number of images obtained from the capsule endoscopy per patient, doctors need too much time to investigate all of them. So, it would be worthwhile to design a system for detecting diseases automatically. In this paper, a new method is presented for automatic detection of tumors in the WCE images. This method will utilize the advantages of the discrete wavelet transform (DWT) and singular value decomposition (SVD) algorithms to extract features from different color channels of the WCE images. Therefore, the extracted features are invariant to rotation and can describe multiresolution characteristics of the WCE images. In order to classify the WCE images, the support vector machine (SVM) method is applied to a data set which includes 400 normal and 400 tumor WCE images. The experimental results show proper performance of the proposed algorithm for detection and isolation of the tumor images which, in the best way, shows 94%, 93%, and 93.5% of sensitivity, specificity, and accuracy in the RGB color space, respectively.
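
    A minimal sketch of the DWT-plus-SVD feature idea, applied to one colour channel, is shown below: a single-level 2-D wavelet decomposition followed by the leading singular values of each sub-band as a rotation-insensitive feature vector. The wavelet, the number of singular values kept, and the random test channel are assumptions; the SVM classification stage is not reproduced.

```python
import numpy as np
import pywt

def dwt_svd_features(channel, wavelet="db1", k=5):
    """Single-level 2-D DWT of one colour channel, then the k largest singular values
    of every sub-band concatenated into the feature vector (wavelet and k are illustrative)."""
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), wavelet)
    feats = []
    for band in (cA, cH, cV, cD):
        s = np.linalg.svd(band, compute_uv=False)    # singular values in descending order
        feats.extend(s[:k])
    return np.array(feats)

channel = np.random.default_rng(6).random((256, 256))   # stand-in for one WCE colour channel
print(dwt_svd_features(channel).shape)                   # (20,) with the defaults above
```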

  1. Singular Value Decomposition Based Features for Automatic Tumor Detection in Wireless Capsule Endoscopy Images.

    Science.gov (United States)

    Faghih Dinevari, Vahid; Karimian Khosroshahi, Ghader; Zolfy Lighvan, Mina

    2016-01-01

    Wireless capsule endoscopy (WCE) is a new noninvasive instrument which allows direct observation of the gastrointestinal tract to diagnose its relative diseases. Because of the large number of images obtained from the capsule endoscopy per patient, doctors need too much time to investigate all of them. So, it would be worthwhile to design a system for detecting diseases automatically. In this paper, a new method is presented for automatic detection of tumors in the WCE images. This method will utilize the advantages of the discrete wavelet transform (DWT) and singular value decomposition (SVD) algorithms to extract features from different color channels of the WCE images. Therefore, the extracted features are invariant to rotation and can describe multiresolution characteristics of the WCE images. In order to classify the WCE images, the support vector machine (SVM) method is applied to a data set which includes 400 normal and 400 tumor WCE images. The experimental results show proper performance of the proposed algorithm for detection and isolation of the tumor images which, in the best way, shows 94%, 93%, and 93.5% of sensitivity, specificity, and accuracy in the RGB color space, respectively.

  2. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows the number of features to be reduced (from 60,000 to 7,000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network is carried out, which shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Passive Forensics for Region Duplication Image Forgery Based on Harris Feature Points and Local Binary Patterns

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2013-01-01

    Full Text Available The demand for verifying the authenticity of an image has increased greatly now that advanced image-editing software packages are widely used. Region duplication forgery is one of the most common and direct tampering attacks. Several methods have been developed to detect and locate the tampered region, but most fail when the duplicated region undergoes rotation or flipping before being pasted. In this paper, an efficient method based on Harris feature points and local binary patterns is proposed. First, the image is filtered with a pixelwise adaptive Wiener method, and then dense Harris feature points are employed in order to obtain a sufficient number of feature points with approximately uniform distribution. Feature vectors for a circular patch around each feature point are extracted using local binary pattern operators, and similar Harris points are matched based on their feature vectors using the BBF algorithm. Finally, the RANSAC algorithm is employed to eliminate possible erroneous matches. Experimental results demonstrate that the proposed method can effectively detect region duplication forgery, even when an image has been distorted by rotation, flipping, blurring, AWGN, JPEG compression, or their mixed operations, and it is especially resistant to forgeries in flat areas with little visual structure.

  4. Quantitative Imaging Features and Postoperative Hepatic Insufficiency: A Multi-Institutional Expanded Cohort.

    Science.gov (United States)

    Pak, Linda M; Chakraborty, Jayasree; Gonen, Mithat; Chapman, William C; Do, Richard Kg; Koerkamp, Bas Groot; Verhoef, Kees; Lee, Ser Yee; Massani, Marco; van der Stok, Eric P; Simpson, Amber L

    2018-02-14

    Post-hepatectomy liver insufficiency (PHLI) is a significant cause of morbidity and mortality after liver resection. Quantitative imaging analysis using CT scans measures variations in pixel intensity related to perfusion. A preliminary study demonstrated a correlation between quantitative imaging features of the future liver remnant (FLR) parenchyma from preoperative CT scans and PHLI. The objective of the present study was to explore the potential application of quantitative imaging analysis in PHLI in an expanded, multi-institutional cohort. Patients were retrospectively identified from five high-volume academic centers that developed PHLI after major hepatectomy and were matched to control patients without PHLI (by extent of resection, pre-operative chemotherapy treatment, age (±5 years), and sex). Quantitative imaging features were extracted from the FLR in the preoperative CT scan, and the most discriminatory features were identified using conditional logistic regression. %RLV was defined as follows: (FLR volume)/(total liver volume) × 100. Significant clinical and imaging features were combined in a multivariate analysis using conditional logistic regression. From 2000 to 2015, 74 patients with PHLI and 74 matched controls were identified. The most common indications for surgery were colorectal liver metastases (53%), hepatocellular carcinoma (37%), and cholangiocarcinoma (9%). Two CT imaging features (FD1_4: image complexity; ACM1_10: spatial distribution of pixel intensity) were strongly associated with PHLI and remained associated with PHLI on multivariate analysis (p=0.018 and p=0.023, respectively), independent of clinical variables, including preoperative bilirubin and %RLV. Quantitative imaging features are independently associated with PHLI and are a promising preoperative risk stratification tool. Copyright © 2018. Published by Elsevier Inc.

  5. Computer Aided Quantification of Pathological Features for Flexor Tendon Pulleys on Microscopic Images

    Directory of Open Access Journals (Sweden)

    Yung-Chun Liu

    2013-01-01

    Full Text Available Quantifying the pathological features of flexor tendon pulleys is essential for grading the trigger finger, since it provides clinicians with objective evidence derived from microscopic images. Manual grading is time consuming and dependent on observer experience, yet there is a lack of image processing methods for automatically extracting pulley pathological features. In this paper, we design and develop a color-based image segmentation system to extract the color and shape features from pulley microscopic images. Two parameters, the size ratio of abnormal tissue regions and the number ratio of abnormal nuclei, are estimated as pathological progression indices. The automatic quantification results show clear discrimination among different levels of diseased pulley specimens, which are prone to misjudgment under human visual inspection. The proposed system provides a reliable and automatic way to obtain pathological parameters, instead of manual evaluation, which is subject to intra- and interoperator variability. Experiments with 290 microscopic images from 29 pulley specimens show good correspondence with pathologist expectations. Hence, the proposed system has great potential for assisting clinical experts in routine histopathological examinations.

  6. MULTI-SOURCE HIERARCHICAL CONDITIONAL RANDOM FIELD MODEL FOR FEATURE FUSION OF REMOTE SENSING IMAGES AND LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2013-05-01

    Full Text Available Feature fusion of remote sensing images and LiDAR point cloud data, which have strong complementarity, can effectively exploit the advantages of multiple classes of features to provide more reliable information support for remote sensing applications such as object classification and recognition. In this paper, we introduce a novel multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR data for image classification. Firstly, typical features are selected to obtain the regions of interest from the multi-source data; then the MSHCRF model is constructed to exploit the features, the category compatibility of images and the category consistency of the multi-source data based on these regions, and the outputs of the model represent the optimal results of the image classification. Competitive results demonstrate the precision and robustness of the proposed method.

  7. Moving Target Information Extraction Based on Single Satellite Image

    Directory of Open Access Journals (Sweden)

    ZHAO Shihu

    2015-03-01

    Full Text Available The spatially and temporally variant effects in high-resolution satellite push-broom imaging are analyzed, and a spatially and temporally variant imaging model is established. A moving target information extraction method is proposed based on a single satellite remote sensing image. The experiment computes the flying speeds of two airplanes using a ZY-3 multispectral image and proves the validity of the spatially and temporally variant model and the moving-target information extraction method.

  8. Feature extraction and classification in surface grading application using multivariate statistical projection models

    Science.gov (United States)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database comprises 14 different models, 42 surface classes and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the factor effects. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work, where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved 97.36% accuracy.
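
    PLS-DA is commonly implemented as PLS regression on a one-hot class indicator matrix, with the class taken as the largest predicted response. The scikit-learn sketch below follows that common recipe on random stand-in descriptors; the number of latent components, the data, and the three-class setup are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
X = rng.standard_normal((120, 30))        # stand-in for soft colour texture descriptors
y = rng.integers(0, 3, size=120)          # three illustrative surface classes
Y = np.eye(3)[y]                          # one-hot indicator matrix

pls = PLSRegression(n_components=5)       # number of latent variables is illustrative
pls.fit(X, Y)
y_pred = pls.predict(X).argmax(axis=1)    # PLS-DA decision: largest predicted response
print("training accuracy:", (y_pred == y).mean())
```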

  9. An Effective Fault Feature Extraction Method for Gas Turbine Generator System Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian-Hua Zhong

    2016-01-01

    Full Text Available Fault diagnosis is very important to maintain the operation of a gas turbine generator system (GTGS) in power plants, where any abnormal situation will interrupt the electricity supply. Fault diagnosis of the GTGS faces the main challenge that the acquired data, vibration or sound signals, contain a great deal of redundant information, which extends the fault identification time and degrades the diagnostic accuracy. To improve the diagnostic performance in the GTGS, an effective fault feature extraction framework is proposed to solve the problem of signal disorder and redundant information in the acquired signal. The proposed framework combines feature extraction with a general machine learning method, the support vector machine (SVM), to implement intelligent fault diagnosis. The feature extraction method adopts wavelet packet transform and time-domain statistical features to extract the features of faults from the vibration signal. To further reduce the redundant information in the extracted features, kernel principal component analysis is applied in this study. Experimental results indicate that the proposed feature extraction technique is an effective method to extract the useful features of faults, resulting in improvement of the performance of fault diagnosis for the GTGS.
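
    A minimal sketch of the feature-extraction half, combining wavelet-packet node energies with a few time-domain statistics, is given below; the wavelet, decomposition depth, and choice of statistics are assumptions for illustration, and the kernel PCA and SVM stages are not shown.

```python
import numpy as np
import pywt

def wp_energy_features(signal, wavelet="db4", level=3):
    """Relative energy of each terminal wavelet-packet node (2**level values)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()

def time_domain_features(signal):
    """A small illustrative statistic set: RMS, kurtosis, crest factor, skewness."""
    x = signal - signal.mean()
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([
        rms,
        np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12),    # kurtosis (non-excess)
        np.max(np.abs(x)) / (rms + 1e-12),                    # crest factor
        np.mean(x ** 3) / (np.mean(x ** 2) ** 1.5 + 1e-12),   # skewness
    ])

vib = np.random.default_rng(8).standard_normal(4096)          # stand-in for a vibration record
features = np.concatenate([wp_energy_features(vib), time_domain_features(vib)])
```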

  10. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras [3], allowing more clinically relevant retinopathy to be detected [4]. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results [1,4]. However, in 40% of cases, more retinopathy was found outside the 7 ETDRS fields by UWF, and in 10% of cases, retinopathy was reclassified as more severe [4]. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR [6]. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Patterns. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.
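
    Of the descriptor families listed, the Histogram-of-Gradient features can be sketched with scikit-image as below; the patch size and HOG parameters are illustrative assumptions, and the intensity and LBP features and the classifier comparison are not reproduced.

```python
import numpy as np
from skimage.feature import hog

patch = np.random.default_rng(9).random((64, 64))    # stand-in for a candidate lesion patch
hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), block_norm="L2-Hys")
print(hog_vec.shape)   # this vector would be stacked with intensity and LBP features for a classifier
```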

  11. MR Imaging Features of Obturator Internus Bursa of the Hip

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ji Young; Lee, Sun Wha; Kim, Jong Oh [School of Medicine, Ewha Womans University, Seoul (Korea, Republic of)

    2008-08-15

    The authors report two cases with distension of the obturator internus bursa identified on MR images, and describe the location and characteristic features of obturator internus bursitis: the 'boomerang'-shaped fluid distension between the obturator internus tendon and the posterior grooved surface of the ischium.

  12. Identification and Quantification Soil Redoximorphic Features by Digital Image Processing

    Science.gov (United States)

    Soil redoximorphic features (SRFs) have provided scientists and land managers with insight into relative soil moisture for approximately 60 years. The overall objective of this study was to develop a new method of SRF identification and quantification from soil cores using a digital camera and imag...

  13. Magnetic resonance imaging features of extremity sarcomas of uncertain differentiation

    International Nuclear Information System (INIS)

    Stacy, G.S.; Nair, L.

    2007-01-01

    The purpose of this review is to illustrate the pertinent clinical and imaging features of extremity sarcomas of uncertain differentiation, including synovial sarcoma, epithelioid sarcoma, clear-cell sarcoma, and alveolar soft part sarcoma. These tumours should be considered in the differential diagnosis when a soft-tissue mass is encountered in the extremity of an adolescent or young adult.

  14. Monitoring cardiac stress using features extracted from S₁ heart sounds.

    Science.gov (United States)

    Herzig, Jonathan; Bickel, Amitai; Eitan, Arie; Intrator, Nathan

    2015-04-01

    It is known that acoustic heart sounds carry significant information about the mechanical activity of the heart. In this paper, we present a novel type of cardiac monitoring based on heart sound analysis. Specifically, we study two morphological features and their associations with physiological changes from the baseline state. The framework is demonstrated on recordings during laparoscopic surgeries of 15 patients. Insufflation, which is performed during laparoscopic surgery, provides a controlled, externally induced cardiac stress, enabling an analysis of each patient with respect to their own baseline. We demonstrate that the proposed features change during cardiac stress, and the change is more significant for patients with cardiac problems. Furthermore, we show that other well-known ECG morphology features are less sensitive in this specific cardiac stress experiment.

  15. Point features extraction: towards slam for an autonomous underwater vehicle

    CSIR Research Space (South Africa)

    Matsebe, O

    2010-07-01

    Full Text Available at different viewing angles, the navigation features should not be close to other strong sonar reflectors. Spatial Compactness: The feature should be observed over a narrow bearing range when observed with a range bearing sonar for it to be small enough... to objects in the environment. The bearing information corresponding to the HIR scan line and the current vehicle pose is also stored. The Range Buffer is then differentiated to form a new buffer (Difference Buffer, Di). The i-th element of the Difference...

  16. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge-detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge-detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted using spectral and geometrical features. The results indicate that edge-detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% as measured by the confusion matrix.

  17. Automated Tongue Feature Extraction for ZHENG Classification in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Ratchadaporn Kanawong

    2012-01-01

    Full Text Available ZHENG, the Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and is thus used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffered from Cold or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate the excellent performance of our proposed system.

  18. Smart imaging for power-efficient extraction of Viola-Jones local descriptors

    Science.gov (United States)

    Fernández-Berni, J.; Carmona-Galán, R. A.; del Río, R.; Leñero-Bardallo, Juan A.; Suárez-Cambre, M.; Rodríguez-Vázquez, Á.

    2014-03-01

    In computer vision, local descriptors permit to summarize relevant visual cues through feature vectors. These vectors constitute inputs for trained classifiers which in turn enable different high-level vision tasks. While local descriptors certainly alleviate the computation load of subsequent processing stages by preventing them from handling raw images, they still have to deal with individual pixels. Feature vector extraction can thus become a major limitation for conventional embedded vision hardware. In this paper, we present a power-efficient sensing processing array conceived to provide the computation of integral images at different scales. These images are intermediate representations that speed up feature extraction. In particular, the mixed-signal array operation is tailored for extraction of Haar-like features. These features feed the cascade of classifiers at the core of the Viola-Jones framework. The processing lattice has been designed for the standard UMC 0.18μm 1P6M CMOS process. In addition to integral image computation, the array can be reprogrammed to deliver other early vision tasks: concurrent rectangular area sum, block-wise HDR imaging, Gaussian pyramids and image pre-warping for subsequent reduced kernel filtering.
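
    As a software analogue of what the sensing array computes, the sketch below builds an integral image with cumulative sums and then evaluates a rectangle sum and a two-rectangle Haar-like feature in constant time; the window position and size are illustrative, and the mixed-signal array design itself is of course not reproduced in software.

```python
import numpy as np

def integral_image(img):
    """Zero-padded integral image: ii[y, x] holds the sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] from four integral-image lookups (O(1) per rectangle)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.random.default_rng(10).random((24, 24))     # stand-in for a detection window
ii = integral_image(img)
print(haar_two_rect(ii, 4, 4, 12, 12))
```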

  19. The extraction of coastal windbreak forest information based on UAV remote sensing images

    Science.gov (United States)

    Shang, Weitao; Gao, Zhiqiang; Jiang, Xiaopeng; Chen, Maosi

    2017-09-01

    Unmanned aerial vehicles (UAVs) have been increasingly used for natural resource applications in recent years as a result of their greater availability, the miniaturization of sensors, and the ability to deploy UAVs relatively quickly and repeatedly at low altitudes. UAV remote sensing offers rich information, including spatial, spectral and contextual information. In order to extract information from these UAV remote sensing images, we need to utilize the spatial and contextual information of an object and its surroundings. If pixel-based approaches are applied to extract information from such remotely sensed data, only spectral information is used; information extraction then relies exclusively on gray-level thresholding, and the situation becomes worse when only specific features must be extracted from UAV remote sensing images. To overcome this, an object-oriented approach is implemented. Following the object-oriented approach, coastal windbreak forest information is extracted from UAV remote sensing images. First, the images are segmented. Second, the spectral information and object geometry of the image objects are comprehensively applied to build a coastal windbreak forest extraction knowledge base. Third, the results of coastal windbreak forest extraction are refined and completed. The results show that the proposed method obtains better coastal windbreak forest extraction accuracy than the pixel-oriented method. In this study, the overall accuracy of the classified image is 0.94 and the Kappa coefficient is 0.92.
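
    The accuracy figures reported above come from a confusion matrix; a minimal sketch of how overall accuracy and the Kappa coefficient are derived from such a matrix follows (the matrix values are made up, not the study's).

        # Sketch: overall accuracy and Cohen's Kappa from a classification confusion matrix.
        import numpy as np

        def accuracy_and_kappa(cm):
            cm = np.asarray(cm, dtype=float)
            total = cm.sum()
            observed = np.trace(cm) / total                                  # overall accuracy
            expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
            kappa = (observed - expected) / (1.0 - expected)
            return observed, kappa

        # Hypothetical 2-class confusion matrix (rows = reference, columns = prediction).
        oa, kappa = accuracy_and_kappa([[95, 5], [7, 93]])
        print(f"overall accuracy = {oa:.2f}, kappa = {kappa:.2f}")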

  20. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang, L Y; Wang, J G; Hu, J

    2017-01-01

    For the fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes by variational mode decomposition, and the fault features of each mode are formed into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed by the inner product of a Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
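
    Permutation entropy, used above to turn each decomposed mode into a feature, can be sketched in a few lines; the embedding order and delay below are common defaults, not necessarily the authors' settings.

        # Sketch: permutation entropy of a 1-D signal (normalized to [0, 1]).
        import numpy as np
        from math import factorial

        def permutation_entropy(x, order=3, delay=1):
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            # Ordinal pattern of each embedded vector, encoded as its argsort permutation.
            patterns = np.array([tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)])
            _, counts = np.unique(patterns, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

        print(permutation_entropy(np.sin(np.linspace(0, 20, 500))))  # low value for a regular signal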

  1. Residual signal feature extraction for gearbox planetary stage fault detection

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Ursin, Thomas; Sweeney, Christian Walsted

    2017-01-01

    Statistical features measuring the signal energy and Gaussianity are calculated from the residual signals between each pair from the first to the fifth tooth mesh frequency of the meshing process in a multi-stage wind turbine gearbox. The suggested algorithm includes resampling from time to angular domain...
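
    Signal energy and a Gaussianity measure such as kurtosis, the kind of residual-signal statistics mentioned above, can be computed as in the following sketch; this is illustrative only, and the actual residual construction and angular resampling are not reproduced here.

        # Sketch: energy and excess kurtosis of a residual vibration signal.
        import numpy as np
        from scipy.stats import kurtosis

        def residual_features(residual):
            residual = np.asarray(residual, dtype=float)
            energy = float(np.sum(residual ** 2))    # signal energy
            gaussianity = float(kurtosis(residual))  # excess kurtosis; close to 0 for a Gaussian signal
            return energy, gaussianity

        print(residual_features(np.random.randn(4096)))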

  2. Self-organizing networks for extracting jet features

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Pi, H.; Roegnvaldsson, T.

    1991-01-01

    Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organizing networks lies in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics: hadronization model discrimination and separation of b, c and light quarks. (orig.)
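
    A minimal self-organizing map update rule, of the kind reviewed above, can be sketched as follows; this is a toy one-dimensional map in NumPy, and the learning-rate and neighbourhood schedules are illustrative assumptions.

        # Sketch: one-dimensional self-organizing map trained on random feature vectors.
        import numpy as np

        def train_som(data, n_nodes=10, epochs=100, lr0=0.5, sigma0=3.0):
            rng = np.random.default_rng(0)
            weights = rng.normal(size=(n_nodes, data.shape[1]))
            for t in range(epochs):
                lr = lr0 * (1 - t / epochs)                  # decaying learning rate
                sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood width
                for x in data:
                    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                    dist = np.abs(np.arange(n_nodes) - winner)
                    h = np.exp(-dist ** 2 / (2 * sigma ** 2))  # neighbourhood function
                    weights += lr * h[:, None] * (x - weights)
            return weights

        som = train_som(np.random.default_rng(1).normal(size=(200, 4)))
        print(som.shape)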

  3. A Methodology for Texture Feature-based Quality Assessment in Nucleus Segmentation of Histopathology Image.

    Science.gov (United States)

    Wen, Si; Kurc, Tahsin M; Gao, Yi; Zhao, Tianhao; Saltz, Joel H; Zhu, Wei

    2017-01-01

    Image segmentation pipelines often are sensitive to algorithm input parameters. Algorithm parameters optimized for a set of images do not necessarily produce good-quality segmentation results for other images. Even within an image, some regions may not be well segmented due to a number of factors, including multiple pieces of tissue with distinct characteristics, differences in staining of the tissue, normal versus tumor regions, and tumor heterogeneity. Evaluation of the quality of segmentation results is an important step in image analysis. It is very labor intensive to do quality assessment manually with large image datasets because a whole-slide tissue image may have hundreds of thousands of nuclei. Semi-automatic mechanisms are needed to assist researchers and application developers in efficiently detecting image regions with poor segmentations. Our goal is to develop and evaluate a machine-learning-based semi-automated workflow to assess the quality of nucleus segmentation results in a large set of whole-slide tissue images. We propose a quality control methodology in which machine-learning algorithms are trained with image intensity and texture features to produce a classification model. This model is applied to image patches in a whole-slide tissue image to predict the quality of nucleus segmentation in each patch. The training step of our methodology involves the selection and labeling of regions by a pathologist in a set of images to create the training dataset. The image regions are partitioned into patches. A set of intensity and texture features is computed for each patch. A classifier is trained with the features and the labels assigned by the pathologist. At the end of this process, a classification model is generated. The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is applied to each patch to predict the patch's label. The proposed methodology has been
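
    The patch-level feature computation described above can be sketched roughly as follows; the specific features (mean, standard deviation, gradient-based edge density), the random-forest classifier and the fabricated labels are stand-ins for the authors' intensity and texture feature set, not their actual pipeline.

        # Sketch: split a grayscale image into patches, compute simple intensity/texture features,
        # and train a classifier on patch labels (labels here are fabricated for illustration).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def patch_features(gray, patch=64):
            gray = gray.astype(float)
            gy, gx = np.gradient(gray)
            magnitude = np.hypot(gx, gy)
            feats = []
            for r in range(0, gray.shape[0] - patch + 1, patch):
                for c in range(0, gray.shape[1] - patch + 1, patch):
                    win = gray[r:r + patch, c:c + patch]
                    edges = magnitude[r:r + patch, c:c + patch]
                    feats.append([win.mean(), win.std(), edges.mean()])  # intensity + crude texture
            return np.array(feats)

        rng = np.random.default_rng(0)
        X = patch_features(rng.integers(0, 256, (512, 512)).astype(float))
        y = rng.integers(0, 2, len(X))  # 1 = good segmentation, 0 = poor (fabricated)
        model = RandomForestClassifier(n_estimators=50).fit(X, y)
        print(model.predict(X[:5]))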

  4. Autoscope: automated otoscopy image analysis to diagnose ear pathology and use of clinically motivated eardrum features

    Science.gov (United States)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yu, Lianbo; Gurcan, Metin

    2017-03-01

    In this study, we propose an automated otoscopy image analysis system called Autoscope. To the best of our knowledge, Autoscope is the first system designed to detect a wide range of eardrum abnormalities by using high-resolution otoscope images and to report the condition of the eardrum as "normal" or "abnormal." In order to achieve this goal, we first developed a preprocessing step to reduce camera-specific problems, detect the region of interest in the image, and prepare the image for further analysis. Subsequently, we designed a new set of clinically motivated eardrum features (CMEF). Furthermore, we evaluated the potential of the visual MPEG-7 descriptors for the task of tympanic membrane image classification. We then fused the information extracted from the CMEF and state-of-the-art computer vision features (CVF), which included the MPEG-7 descriptors and two additional features, using a state-of-the-art classifier. In our experiments, 247 tympanic membrane images with 14 different types of abnormality were used, and Autoscope was able to classify the given tympanic membrane images as normal or abnormal with 84.6% accuracy.
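
    Fusing the two feature groups can be as simple as concatenating the vectors before classification; the sketch below shows that pattern with made-up feature arrays and a generic classifier, not Autoscope's actual features or model.

        # Sketch: early fusion of two feature sets by concatenation before classification.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        cmef = rng.normal(size=(247, 12))  # hypothetical clinically motivated eardrum features
        cvf = rng.normal(size=(247, 64))   # hypothetical computer vision features (e.g. descriptors)
        labels = rng.integers(0, 2, 247)   # 0 = normal, 1 = abnormal (fabricated for illustration)

        X = np.hstack([cmef, cvf])         # fused feature vector per image
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))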

  5. Extraction of terrain features from digital elevation models

    Science.gov (United States)

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.
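
    Flow direction, one of the DEM-derived quantities mentioned above, is commonly computed with a D8 scheme: each cell drains to whichever of its eight neighbours gives the steepest downhill drop. The sketch below is a plain-NumPy illustration of that idea, not the software used in the study.

        # Sketch: D8 flow direction for a small DEM; returns the index 0-7 of the steepest
        # downslope neighbour for each interior cell, or -1 for pits.
        import numpy as np

        OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

        def d8_flow_direction(dem):
            dem = np.asarray(dem, dtype=float)
            rows, cols = dem.shape
            direction = np.full((rows, cols), -1, dtype=int)
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    drops = []
                    for dr, dc in OFFSETS:
                        dist = np.hypot(dr, dc)  # 1 for edge neighbours, sqrt(2) for diagonals
                        drops.append((dem[r, c] - dem[r + dr, c + dc]) / dist)
                    if max(drops) > 0:
                        direction[r, c] = int(np.argmax(drops))
            return direction

        dem = np.array([[5, 5, 5, 5], [5, 4, 3, 5], [5, 3, 2, 5], [5, 5, 1, 5]], dtype=float)
        print(d8_flow_direction(dem))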

  6. Selection of the best features for leukocytes classification in blood smear microscopic images

    Science.gov (United States)

    Sarrafzadeh, Omid; Rabbani, Hossein; Talebi, Ardeshir; Banaem, Hossein Usefi

    2014-03-01

    Automatic differential counting of leukocytes provides invaluable information to the pathologist for diagnosis and treatment of many diseases. The main objective of this paper is to detect leukocytes in a blood smear microscopic image and classify them into their types (Neutrophil, Eosinophil, Basophil, Lymphocyte and Monocyte) using features that pathologists consider to differentiate leukocytes. The features comprise color, geometric and texture features. The colors of the nucleus and cytoplasm vary among the leukocytes. Lymphocytes have a single, large, round or oval nucleus, and Monocytes have a single convoluted nucleus. The nucleus of Eosinophils is divided into 2 segments and the nucleus of Neutrophils into 2 to 5 segments. Lymphocytes often have no granules, Monocytes have tiny granules, Neutrophils have fine granules and Eosinophils have large granules in the cytoplasm. Six color features are extracted from both nucleus and cytoplasm, 6 geometric features only from the nucleus, and 6 statistical features and 7 moment-invariant features only from the cytoplasm of the leukocytes. These features are fed to support vector machine (SVM) classifiers with a one-versus-one architecture. The results obtained by applying the proposed method to blood smear microscopic images of 10 patients, including 149 white blood cells (WBCs), indicate that the correct rate for all classifiers is above 93%, which is higher than in the previous literature.
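
    The one-versus-one SVM architecture mentioned above trains a binary classifier for every pair of leukocyte classes; a minimal sketch with fabricated feature vectors follows (the feature dimensionality and labels are assumptions for illustration).

        # Sketch: one-versus-one multiclass SVM over hand-crafted leukocyte features.
        import numpy as np
        from sklearn.multiclass import OneVsOneClassifier
        from sklearn.svm import SVC

        classes = ["Neutrophil", "Eosinophil", "Basophil", "Lymphocyte", "Monocyte"]
        rng = np.random.default_rng(0)
        X = rng.normal(size=(149, 25))          # hypothetical 25-dimensional feature vectors
        y = rng.integers(0, len(classes), 149)  # fabricated labels for illustration

        clf = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)  # 5 classes -> 10 pairwise SVMs
        print(classes[clf.predict(X[:1])[0]])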

  7. Improving features used for hyper-temporal land cover change detection by reducing the uncertainty in the feature extraction method

    CSIR Research Space (South Africa)

    Salmon, BP

    2017-07-01

    Full Text Available This work investigates the effect which the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short-time Fourier transform as the feature extraction method provides meaningful, robust input to a machine learning method. In theory...
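
    A short-time Fourier transform over a sliding temporal window, the feature extraction step referred to above, can be sketched as follows; the synthetic time series, the window length and the use of scipy.signal.stft are assumptions for illustration, not the study's data or settings.

        # Sketch: short-time Fourier transform magnitudes of a hyper-temporal NDVI-like series,
        # yielding one feature vector per sliding window position.
        import numpy as np
        from scipy.signal import stft

        rng = np.random.default_rng(0)
        t = np.arange(368)  # hypothetical 8-year series of 8-day composites (46 per year)
        series = np.sin(2 * np.pi * t / 46) + 0.1 * rng.normal(size=t.size)  # annual cycle + noise

        freqs, times, Z = stft(series, nperseg=46, noverlap=23)  # 46-sample (~1 year) window
        features = np.abs(Z).T  # rows = window positions, columns = frequency bins
        print(features.shape)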

  8. Imaging features of intracerebral hemorrhage with cerebral amyloid angiopathy: Systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Neshika Samarasekera

    Full Text Available We sought to summarize the Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) features of intracerebral hemorrhage (ICH) associated with cerebral amyloid angiopathy (CAA) in published observational radio-pathological studies. In November 2016, two authors searched OVID Medline (1946-), Embase (1974-) and relevant bibliographies for studies of imaging features of lobar or cerebellar ICH with pathologically proven CAA ("CAA-associated ICH"). Two authors assessed the studies' diagnostic test accuracy methodology and independently extracted data. We identified 22 studies (21 case series and one cross-sectional study with controls) of CT features in 297 adults, two cross-sectional studies of MRI features in 81 adults, and one study which reported both CT and MRI features in 22 adults. Methods of CAA assessment varied, and rating of imaging features was not masked to pathology. The most frequently reported CT features of CAA-associated ICH in the 21 case series were subarachnoid extension (pooled proportion 82%, 95% CI 69-93%, I² = 51%, 12 studies) and an irregular ICH border (64%, 95% CI 32-91%, I² = 85%, five studies). CAA-associated ICH was more likely to be multiple on CT than non-CAA ICH in one cross-sectional study (CAA-associated ICH 7/41 vs. non-CAA ICH 0/42; χ² = 7.8, p = 0.005). Superficial siderosis on MRI was present in 52% of CAA-associated ICH (95% CI 39-65%, I² = 35%, 3 studies). Subarachnoid extension and an irregular ICH border are common imaging features of CAA-associated ICH, but methodologically rigorous diagnostic test accuracy studies are required to determine the sensitivity and specificity of these features.
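
    The pooled proportions and I² statistics quoted above follow the usual meta-analytic recipe; the sketch below is a simplified fixed-effect, inverse-variance pooling with made-up study counts, and the published analysis may well have used a different model (e.g. random effects).

        # Sketch: inverse-variance pooled proportion with 95% CI and I^2 heterogeneity (fixed effect).
        import numpy as np

        def pool_proportions(events, totals):
            events, totals = np.asarray(events, float), np.asarray(totals, float)
            p = events / totals
            var = p * (1 - p) / totals          # binomial variance of each study's proportion
            w = 1.0 / var                       # inverse-variance weights
            pooled = np.sum(w * p) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            q = np.sum(w * (p - pooled) ** 2)   # Cochran's Q
            i2 = max(0.0, (q - (len(p) - 1)) / q) if q > 0 else 0.0
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

        # Hypothetical studies: cases with subarachnoid extension / total CAA-associated ICH.
        print(pool_proportions([20, 15, 30], [25, 20, 35]))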